Compilers use a mix of jump tables and binary searches to dispatch to the correct case code of a switch statement. For example:
switch (a)
{
case 0:
    /* ... */
    break;
case 1:
    /* ... */
    break;
case 2:
    /* ... */
    break;
case 5:
    /* ... */
    break;
case 6:
    /* ... */
    break;
case 9:
    /* ... */
    break;
case 10:
    /* ... */
    break;
}
The generated code will use a binary search to narrow the value down to one of the three available clusters (0-2, 5-6, 9-10), and then a (pseudo) jump table to dispatch within that cluster. I say pseudo jump table because it is not necessarily a real table in memory, but a computed offset added to a base address followed by an indirect jump, which emulates the jump-table mechanism. The problem is that modern CPUs rely heavily on instruction prefetching and branch prediction, and an indirect jump whose target is hard to predict effectively throws that work away. This is why a binary search, made of predictable conditional jumps, can run faster than a pure jump table.
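As a rough illustration only, and not what any particular compiler actually emits, the lowering of the switch above could look like the following C sketch. The handler functions and the dispatch helper are hypothetical names standing in for the "..." case bodies.

#include <stdio.h>

/* Hypothetical handlers standing in for the "..." case bodies above. */
static void handle_0(void)  { puts("case 0");  }
static void handle_1(void)  { puts("case 1");  }
static void handle_2(void)  { puts("case 2");  }
static void handle_5(void)  { puts("case 5");  }
static void handle_6(void)  { puts("case 6");  }
static void handle_9(void)  { puts("case 9");  }
static void handle_10(void) { puts("case 10"); }

static void dispatch(int a)
{
    /* One small table per dense cluster, so direct indexing works. */
    static void (*const cluster_0_2[])(void)  = { handle_0, handle_1, handle_2 };
    static void (*const cluster_5_6[])(void)  = { handle_5, handle_6 };
    static void (*const cluster_9_10[])(void) = { handle_9, handle_10 };

    /* Binary-search-style comparisons select the cluster... */
    if (a < 5) {
        if (a >= 0 && a <= 2)
            cluster_0_2[a]();            /* ...then a table jump inside it. */
    } else if (a < 9) {
        if (a <= 6)                      /* a >= 5 is already known here */
            cluster_5_6[a - 5]();
    } else {
        if (a <= 10)                     /* a >= 9 is already known here */
            cluster_9_10[a - 9]();
    }
    /* Values outside every cluster fall through, like a missing default. */
}

int main(void)
{
    dispatch(6);   /* prints "case 6" */
    dispatch(4);   /* matches no cluster, so nothing happens */
    return 0;
}

A real compiler does the same thing with compares and one indirect jump in machine code; the function-pointer tables here only stand in for that indirect jump.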
When you have a very large range of case values, e.g. cases 0-999, instead of creating one huge jump table the compiler can shrink the table to a fraction of the range and reach the individual cases with a binary search. For example, create a first-level table of 10 entries covering the subranges 0-99, 100-199, 200-299, ..., 900-999; the table selects the subrange, a binary search narrows it down, and a final, reduced jump table performs the last access. In this case we used a density of 0.1.
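To make the two-level idea concrete, here is a hedged C sketch rather than anything a specific compiler is guaranteed to emit: a 10-entry first-level table picks the 100-wide block, and a binary search finishes the dispatch inside it (a real compiler could just as well end with another small jump table). All of the names (dispatch, handle_low, handle_high, case_entry) are invented for the example, and the case values are assumed to be scattered over 0-999 rather than fully consecutive, since that is where the trade-off matters.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical handler type and two sample handlers. */
typedef void (*handler_fn)(int);
static void handle_low(int a)  { printf("low case %d\n", a);  }
static void handle_high(int a) { printf("high case %d\n", a); }

/* One sorted (value, handler) list per 100-wide block; only two blocks
   are populated here to keep the sketch short. */
struct case_entry { int value; handler_fn fn; };

static const struct case_entry block_0[] = { {0, handle_low}, {42, handle_low}, {99, handle_low} };
static const struct case_entry block_9[] = { {900, handle_high}, {950, handle_high}, {999, handle_high} };

static const struct case_entry *const blocks[10] = { block_0, NULL, NULL, NULL, NULL,
                                                     NULL, NULL, NULL, NULL, block_9 };
static const size_t block_len[10] = { 3, 0, 0, 0, 0, 0, 0, 0, 0, 3 };

static void dispatch(int a)
{
    if (a < 0 || a > 999)
        return;                              /* out of range: no case matches */

    /* First level: a tiny 10-entry table selects the 100-wide block. */
    const struct case_entry *cases = blocks[a / 100];
    size_t lo = 0, hi = block_len[a / 100];
    if (cases == NULL)
        return;

    /* Second level: binary search inside the block. */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (cases[mid].value < a)
            lo = mid + 1;
        else if (cases[mid].value > a)
            hi = mid;
        else {
            cases[mid].fn(a);
            return;
        }
    }
}

int main(void)
{
    dispatch(42);    /* prints "low case 42"   */
    dispatch(950);   /* prints "high case 950" */
    dispatch(500);   /* empty block: nothing happens */
    return 0;
}

Flipping the two levels, binary search over the blocks first and a jump table at the end, gives the reverse variant mentioned next.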
The reverse is also possible: binary search first and a table jump afterwards. What the optimization affects is the trade-off between memory usage and execution speed: jump tables cost memory, while binary search costs extra comparison code and time.
In any case, enabling optimization resets the density to its default value, which is 0.5.
For a more in-depth explanation, you can read the article by Vlad Lazarenko.