iqm.benchmarks.optimization.qscore.QScoreConfiguration#

class iqm.benchmarks.optimization.qscore.QScoreConfiguration(*, benchmark: ~typing.Type[~iqm.benchmarks.benchmark_definition.Benchmark] = <class 'iqm.benchmarks.optimization.qscore.QScoreBenchmark'>, shots: int = 256, max_gates_per_batch: int | None = None, max_circuits_per_batch: int | None = None, calset_id: str | None = None, routing_method: ~typing.Literal['basic', 'lookahead', 'stochastic', 'sabre', 'none'] = 'sabre', physical_layout: ~typing.Literal['fixed', 'batching'] = 'fixed', use_dd: bool | None = False, dd_strategy: ~iqm.iqm_client.models.DDStrategy | None = None, num_instances: int, num_qaoa_layers: int = 1, min_num_nodes: int = 2, max_num_nodes: int | None = None, use_virtual_node: bool = True, use_classically_optimized_angles: bool = True, choose_qubits_routine: ~typing.Literal['naive', 'custom'] = 'naive', min_num_qubits: int = 2, custom_qubits_array: ~typing.Sequence[~typing.Sequence[int]] | None = None, qiskit_optim_level: int = 3, optimize_sqg: bool = True, seed: int = 1, REM: bool = False, mit_shots: int = 1000)#

Bases: BenchmarkConfigurationBase

Q-score configuration.

Parameters:
  • benchmark (Type[Benchmark]) –

  • shots (int) –

  • max_gates_per_batch (int | None) –

  • max_circuits_per_batch (int | None) –

  • calset_id (str | None) –

  • routing_method (Literal['basic', 'lookahead', 'stochastic', 'sabre', 'none']) –

  • physical_layout (Literal['fixed', 'batching']) –

  • use_dd (bool | None) –

  • dd_strategy (DDStrategy | None) –

  • num_instances (int) –

  • num_qaoa_layers (int) –

  • min_num_nodes (int) –

  • max_num_nodes (int | None) –

  • use_virtual_node (bool) –

  • use_classically_optimized_angles (bool) –

  • choose_qubits_routine (Literal['naive', 'custom']) –

  • min_num_qubits (int) –

  • custom_qubits_array (Sequence[Sequence[int]] | None) –

  • qiskit_optim_level (int) –

  • optimize_sqg (bool) –

  • seed (int) –

  • REM (bool) –

  • mit_shots (int) –
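A minimal instantiation sketch, assuming the package is importable under the path shown above. `num_instances` is the only field without a default; the other values here (60 instances, graphs of 2 to 5 nodes) are purely illustrative:

```python
from iqm.benchmarks.optimization.qscore import QScoreConfiguration

# Only num_instances is required; every other field falls back to
# the defaults listed in the signature above.
config = QScoreConfiguration(
    num_instances=60,        # number of random graphs per problem size
    min_num_nodes=2,
    max_num_nodes=5,
    use_virtual_node=True,   # raises the attainable Q-score by 1
    choose_qubits_routine="naive",
    seed=1,
)
```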

benchmark#

QScoreBenchmark

Type:

Type[Benchmark]

num_instances#

Number of random graphs to be chosen.

Type:

int

num_qaoa_layers#

Depth of the QAOA circuit. * Default is 1.

Type:

int

min_num_nodes#

The minimum number of nodes to be taken into account, which should be >= 2. * Default is 2.

Type:

int

max_num_nodes#

The maximum number of nodes to be taken into account, which has to be <= num_qubits + 1. * Default is None.

Type:

Optional[int]

use_virtual_node#

Whether to use a virtual node, which increases the potential Q-score by 1. * Default is True.

Type:

bool

use_classically_optimized_angles#

Use classically pre-optimized angles in the QAOA circuit. * Default is True.

Type:

bool

choose_qubits_routine#

The routine to select qubit layouts. * Default is “naive”.

Type:

Literal[“naive”, “custom”]

min_num_qubits#

Minimum number of qubits. * Default is 2.

Type:

int

custom_qubits_array#

The physical qubit layouts to perform the benchmark on. If use_virtual_node is set to True, then a given graph with n nodes requires n-1 selected qubits. If use_virtual_node is set to False, then a given graph with n nodes requires n selected qubits. * Default is None.

Type:

Optional[Sequence[Sequence[int]]]
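The qubit-count rule above can be sketched as a small helper (hypothetical, not part of the package): a graph with n nodes needs n-1 qubits when the virtual node is used, and n qubits otherwise.

```python
def required_qubits(num_nodes: int, use_virtual_node: bool = True) -> int:
    """Physical qubits needed to embed a graph with `num_nodes` nodes."""
    # The virtual node absorbs one graph node, saving one qubit.
    return num_nodes - 1 if use_virtual_node else num_nodes

# A 5-node graph: 4 qubits with the virtual node, 5 without it.
assert required_qubits(5, use_virtual_node=True) == 4
assert required_qubits(5, use_virtual_node=False) == 5
```

Each inner sequence in custom_qubits_array must therefore be at least as long as the value this helper returns for the largest graph size benchmarked.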

qiskit_optim_level#

The Qiskit transpilation optimization level. * Default is 3.

Type:

int

optimize_sqg#

Whether Single Qubit Gate Optimization is performed upon transpilation. * Default is True.

Type:

bool

seed#

The random seed. * Default is 1.

Type:

int

REM#

Use readout error mitigation. * Default is False.

Type:

bool

mit_shots#

Number of shots used in readout error mitigation. * Default is 1000.

Type:

int

Attributes

model_config

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

benchmark

num_instances

num_qaoa_layers

min_num_nodes

max_num_nodes

use_virtual_node

use_classically_optimized_angles

choose_qubits_routine

min_num_qubits

custom_qubits_array

qiskit_optim_level

optimize_sqg

seed

REM

mit_shots

shots

max_gates_per_batch

max_circuits_per_batch

calset_id

routing_method

physical_layout

use_dd

dd_strategy

Methods

model_config: ClassVar[ConfigDict] = {}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].