
Author Topic: writing 12 Bits  (Read 7041 times)

czerny

  • Guest
writing 12 Bits
« on: January 26, 2015, 03:52:13 PM »
I have to write 12-bit codes to a file (LZW compression), as in GIF files. I found information that these 12 bits are written little endian.

But the code I have found so far does not handle it that way, so I am unsure.

Say I have two 12-bit codes (with 3 nibbles each):

Code: [Select]
1: aaaa bbbb cccc
2: dddd eeee ffff

Is this written as

Code: [Select]
bbbb cccc aaaa eeee ffff dddd
or as

Code: [Select]
aaaa bbbb cccc dddd eeee ffff

Offline frankie

  • Global Moderator
  • Member
  • *****
  • Posts: 2113
Re: writing 12 Bits
« Reply #1 on: January 26, 2015, 05:16:30 PM »
Difficult to say without knowing what it is intended for.
I.e. your assumption that the machine reading the code can work with nibbles means that 12 bits is not the word size of that machine. The meaning of endianness on a machine of unknown word size is a big issue (it could have a 6-bit word, in which case you would have to save the first 6 bits and then the next 6 for little endian).
If the machine, or the code that uses the data, is byte oriented, maybe your first guess is the correct one...

If I remember well, from the times when I wrote assembly to access the 12-bit FAT on floppies, the trick was to read 3 bytes at a time, then mask the value to get the first 12 bits, then shift right by 12 to get the next 12 bits.
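
In C, that approach could look like the minimal sketch below. It only illustrates the idea, assuming the 3 bytes are combined as a little-endian 24-bit value as in the FAT12 layout; the example bytes are made up.

Code: [Select]
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Three bytes holding the two 12-bit values 0xabc and 0xdef,
       packed low bits first (the FAT12 layout). */
    unsigned char buf[3] = { 0xbc, 0xfa, 0xde };

    /* Combine the 3 bytes into one little-endian 24-bit value. */
    uint32_t v = (uint32_t)buf[0]
               | ((uint32_t)buf[1] << 8)
               | ((uint32_t)buf[2] << 16);

    unsigned first  = v & 0xfff;          /* mask out the first 12 bits  */
    unsigned second = (v >> 12) & 0xfff;  /* shift right for the next 12 */

    printf("%03x %03x\n", first, second); /* prints: abc def */
    return 0;
}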
« Last Edit: January 26, 2015, 05:20:11 PM by frankie »
"It is better to be hated for what you are than to be loved for what you are not." - Andre Gide

czerny

  • Guest
Re: writing 12 Bits
« Reply #2 on: January 26, 2015, 05:37:47 PM »
The CPU is an 80x86, nothing special. I divided the 12-bit values into nibbles only to illustrate the question.

Offline frankie

  • Global Moderator
  • Member
  • *****
  • Posts: 2113
Re: writing 12 Bits
« Reply #3 on: January 26, 2015, 05:50:02 PM »
The 'machine' was meant as an abstract data-consuming entity, but maybe the technique used for the 12-bit FAT could apply...
"It is better to be hated for what you are than to be loved for what you are not." - Andre Gide

Offline AlexN

  • Global Moderator
  • Member
  • *****
  • Posts: 394
    • Alex's Link Sammlung
Re: writing 12 Bits
« Reply #4 on: January 28, 2015, 09:40:09 AM »
Perhaps this helps:
Code: [Select]
#include <stdio.h>

/* Two 12-bit fields; unsigned, so 0xabc and 0xdef fit without
   signed-overflow trouble. How the compiler packs them into
   memory is implementation-defined. */
typedef struct
{
    unsigned int val1:12;
    unsigned int val2:12;
} str12;

/* Overlay the bit-fields with a byte array to inspect the layout. */
typedef union
{
    str12 str;
    unsigned char ch[3];
} un24;

un24 u;

int main(void)
{
    int i;
    u.str.val1 = 0xabc;
    u.str.val2 = 0xdef;
    for (i = 0; i < 3; i++)
    {
        printf("%02x ", u.ch[i]);
    }
    printf("\n"); /* on a typical little-endian x86 compiler: bc fa de */
    return 0;
}
;)
I think this does not depend on the CPU (an 80x86 cannot write 12 bits as such), but on the compiler.
« Last Edit: January 28, 2015, 09:44:49 AM by AlexN »
best regards
 Alex ;)

czerny

  • Guest
Re: writing 12 Bits
« Reply #5 on: January 28, 2015, 10:43:12 AM »
The problem is not how to do it; there are many ways.
The problem is what the particular LZW implementation in GIF actually requires.

Offline frankie

  • Global Moderator
  • Member
  • *****
  • Posts: 2113
Re: writing 12 Bits
« Reply #6 on: January 28, 2015, 10:57:48 AM »
Take a look at the GifLib project, and here.
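
If I recall the GIF format correctly, the LZW codes are packed into the byte stream least-significant-bit first. Below is a minimal sketch of such a bit writer; it is not taken from GifLib, and the function name put_code and the simplified buffering are my own invention.

Code: [Select]
#include <stdio.h>
#include <stdint.h>

static uint32_t bit_buf = 0; /* pending bits, low bits go out first */
static int      bit_cnt = 0; /* number of pending bits */

/* Append one code of the given width, LSB first, flushing whole bytes. */
static void put_code(FILE *f, unsigned code, int width)
{
    bit_buf |= (uint32_t)code << bit_cnt; /* append above pending bits */
    bit_cnt += width;
    while (bit_cnt >= 8)                  /* flush completed bytes */
    {
        fputc(bit_buf & 0xff, f);
        bit_buf >>= 8;
        bit_cnt -= 8;
    }
}

int main(void)
{
    FILE *f = fopen("codes.bin", "wb");
    if (!f) return 1;
    put_code(f, 0xabc, 12); /* first 12-bit code  */
    put_code(f, 0xdef, 12); /* second 12-bit code */
    fclose(f);
    return 0;
}

For the two 12-bit codes 0xabc and 0xdef this produces the bytes bc fa de, i.e. the low byte of the first code comes out first.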
"It is better to be hated for what you are than to be loved for what you are not." - Andre Gide

czerny

  • Guest
Re: writing 12 Bits
« Reply #7 on: January 28, 2015, 06:59:16 PM »
Thank you! I had completely overlooked it. :(