C language > Expert questions

Look for a sqlite3 replacement


Grincheux:
I have 5546 files that I would like to merge into one big table.
Each file is 13 MB.
Each file has 87 000 records.
482 502 000 total records!
I don't know whether SQLite can even hold that much data, but I expect it to be too slow.
Is there another library for this?
I tried Berkeley DB, but no...
Firebird is too slow.
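A quick sanity check on the scale (the counts are the ones quoted above; the helper names are mine, just for illustration): 5546 files × 87 000 records does come to 482 502 000 records, and at 13 MB per file the raw data is roughly 72 GB.

```c
#include <stdint.h>

/* Counts quoted in the post. */
static const uint64_t n_files       = 5546;
static const uint64_t recs_per_file = 87000;
static const uint64_t mb_per_file   = 13;

uint64_t total_records(void)   { return n_files * recs_per_file; }
uint64_t total_megabytes(void) { return n_files * mb_per_file; }
```

So any solution has to cope with a dataset on the order of 70 GB, which is within SQLite's limits but demands careful indexing.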

TimoVJL:
Can you describe what kind of table you are creating?
 - field types, indexes
Do you have a small set of test material to experiment with?

Grincheux:
It would look like these structures:


--- Quote ---typedef struct tagNASA_OBSERVER
{
   SYSTEMTIME   Date ;

//       1 = '*'  Daylight (refracted solar upper-limb on or above apparent horizon)
//       2 = 'C'  Civil twilight/dawn
//       3 = 'N'  Nautical twilight/dawn
//       4 = 'A'  Astronomical twilight/dawn
//       5 = ' '  Night OR geocentric ephemeris

   int         SolarPresence ;

//       1 = 'm'  Refracted upper-limb of Moon on or above apparent horizon
//       2 = ' '  Refracted upper-limb of Moon below apparent horizon OR geocentric ephemeris

   int         LunarPresence ;
   char        Cnst[4] ;

   double      RA ;
   double      DE ;
   double      Ob_lon ;
   double      Ob_lat ;
   double      Sl_lon ;
   double      Sl_lat ;
   double      hEcl_Lon ;
   double      hEcl_Lat ;
   double      ObsEcLon ;
   double      ObsEcLat ;
   double      GlxLon ;
   double      GlxLat ;
} NASA_OBSERVER, *LPNASA_OBSERVER ;

typedef struct tagNASA_ELEMENTS
{
   SYSTEMTIME      Date ;

   double         Epoch ;                    // JDTDB
   double         Eccentricity ;             // e
   double         PeriapsisDistance ;        // q
   double         Inclination ;              // i
   double         LongitudeAscendingNode ;   //
   double         ArgumentPerifocus ;        // w
   double         TimePeriapsis ;            // Tp
   double         MeanMotion ;               // n
   double         MeanAnomaly ;              // M
   double         TrueAnomaly ;              // nu
   double         SemiMajorAxis ;            // a
   double         ApoapsisDistance ;         //
   double         SiderealOrbitPeriod ;      //
} NASA_ELEMENTS, *LPNASA_ELEMENTS ;

typedef struct tagNASA_VECTORS
{
   SYSTEMTIME      Date ;

   double         JulianDate ;

   double         X ;
   double         Y ;
   double         Z ;
   double         VX ;
   double         VY ;
   double         VZ ;
   double         LT ;
   double         RG ;
   double         RR ;
} NASA_VECTORS, *LPNASA_VECTORS ;

--- End quote ---

Each record contains these structures.

The index will be a DWORD SPK_ID, but I want to be able to run queries based on the date (Julian date), the SPK_ID, the star name, or the right ascension and the declination. In fact I would like to query on all the fields of the OBSERVER and ELEMENTS structures.
Example: for a given date, which stars have RA in range [...] and DE in range [...]? The query could cover all dates, or a given month without distinguishing the year.
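Since each record is a fixed-size C struct, one option worth noting is that a flat binary file already gives O(1) access by record number: one `fseek` plus one `fread`, with no database engine involved. A minimal sketch, using a simplified stand-in struct (not the full `NASA_VECTORS` from the post, and with `SYSTEMTIME` replaced by a Julian-date `double` for portability):

```c
#include <stdio.h>

/* Simplified stand-in for the NASA_VECTORS struct above. */
typedef struct {
    double JulianDate;
    double X, Y, Z;
    double VX, VY, VZ;
} Vec;

/* Fetch record number `idx` from a file of fixed-size records:
   one seek + one read, no scan through earlier records. */
int read_record(FILE *f, long idx, Vec *out)
{
    if (fseek(f, idx * (long)sizeof(Vec), SEEK_SET) != 0)
        return -1;
    return fread(out, sizeof(Vec), 1, f) == 1 ? 0 : -1;
}

/* Demo: write n dummy records at a 12-hour (0.5 day) step,
   then fetch one back by index. */
double demo_fetch(long n, long idx)
{
    FILE *f = tmpfile();
    for (long i = 0; i < n; i++) {
        Vec v = {0};
        v.JulianDate = 2451545.0 + i * 0.5;   /* 12-hour step */
        fwrite(&v, sizeof v, 1, f);
    }
    Vec v = {0};
    read_record(f, idx, &v);
    fclose(f);
    return v.JulianDate;
}
```

This only answers "give me record N"; the multi-field queries described above (RA/DE ranges, dates) would still need separate indexes on top of the flat file, or a real database.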

I get results from NASA every day and I add new stars. For OBSERVER I have results from 1900 to 2020; my goal is 1900 to 2050.
I have a record every 12 hours. My graphics would look better if I could insert results hour by hour.

For ELEMENTS, I have results at a 48-hour step. I would like a 1-hour step.
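Densifying a 12-hour or 48-hour series to a 1-hour step can be done by interpolating between adjacent samples. A minimal sketch, assuming plain linear interpolation is acceptable (good enough for slowly varying columns, but angles such as RA near a 0/360 wrap-around would need special handling, and a cubic or Chebyshev fit would be smoother):

```c
/* Linearly interpolate a value at time t, given two bracketing
   samples (t0, v0) and (t1, v1) with t0 < t1. */
double lerp_at(double t0, double v0, double t1, double v1, double t)
{
    double u = (t - t0) / (t1 - t0);
    return v0 + u * (v1 - v0);
}
```

For example, with samples 12 hours apart, calling `lerp_at` eleven times with `t` stepped one hour at a time fills in the missing hourly records.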

Not very easy.

Grincheux:
There is another solution: use the SPICE Toolkit from NASA.
I tried it, but I don't understand anything. Does anyone know it? With it there would be no need for a huge DB. :-*

frankie:
The efficiency of a database depends only in small part on its underlying code.
The real point is its organization.
I.e. building large tables holding all the data is not efficient: each record search requires time-expensive disk operations to move between records.
A good approach could be to organize the database into multiple small related tables connected with foreign keys. Give a read here, but much more information can be found. Google around.
Create indexes on those small tables, and retrieve the bulky data only for the records you really want.
Not all queries have the same efficiency; try googling "writing efficient queries".
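The "create indexes" advice can be illustrated without any database library: keep a small sorted table mapping a key (say, the SPK_ID) to the offset of the full record in the big data file, and look keys up with a binary search. A sketch, assuming a hypothetical index layout of my own (the names `IndexEntry`, `index_find`, etc. are not from any library):

```c
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical index entry: key -> byte offset of the record
   in the big flat data file. */
typedef struct {
    uint32_t spk_id;
    uint64_t offset;
} IndexEntry;

static int cmp_entry(const void *a, const void *b)
{
    uint32_t ka = ((const IndexEntry *)a)->spk_id;
    uint32_t kb = ((const IndexEntry *)b)->spk_id;
    return (ka > kb) - (ka < kb);
}

/* Sort once after loading; each lookup is then O(log n). */
void index_sort(IndexEntry *idx, size_t n)
{
    qsort(idx, n, sizeof *idx, cmp_entry);
}

const IndexEntry *index_find(const IndexEntry *idx, size_t n, uint32_t key)
{
    IndexEntry probe = { key, 0 };
    return bsearch(&probe, idx, n, sizeof *idx, cmp_entry);
}

/* Demo: sort a tiny unsorted index, then look up one key.
   Returns the stored offset, or 0 if the key is absent. */
uint64_t demo_lookup(uint32_t key)
{
    IndexEntry idx[] = { {30, 300}, {10, 100}, {20, 200} };
    index_sort(idx, 3);
    const IndexEntry *e = index_find(idx, 3, key);
    return e ? e->offset : 0;
}
```

This is essentially what a database index does for you automatically; with SQLite, `CREATE INDEX` on the query columns achieves the same effect without hand-rolled code.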
