
Author Topic: atexit  (Read 4566 times)

czerny

  • Guest
atexit
« on: June 25, 2012, 05:00:09 pm »
This is a little experiment. I would like to allocate memory which automatically gets freed at exit.
The code is very bare, no error checks.

Code: [Select]
#include <stdio.h>
#include <stdlib.h>
#include "free.h"

int main(int argc, char **argv)
{
    void *x = NEW(x, 20);
    void *y = NEW(y, 50);

    atexit(freeall);

    printf("in the middle\n");
    return 1;
}
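(The contents of "free.h" aren't shown in the post, so here is one guess at what it might contain: a sketch where NEW(p, n) ignores its first argument, allocates n bytes, and records the pointer in a fixed-size table, while freeall() releases everything recorded. The names tracked_alloc, tracked and MAX_TRACKED are invented for this sketch.)

```c
/* A guess at what free.h might contain -- the original isn't shown.
   NEW(p, n) allocates n bytes and records the pointer; freeall()
   frees everything that was recorded, so it can go in atexit(). */
#include <stdlib.h>

#define MAX_TRACKED 128
#define NEW(p, n) tracked_alloc(n)

static void *tracked[MAX_TRACKED];
static int tracked_count = 0;

static void *tracked_alloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL && tracked_count < MAX_TRACKED)
        tracked[tracked_count++] = p;   /* remember it for freeall() */
    return p;
}

static void freeall(void)
{
    while (tracked_count > 0)
        free(tracked[--tracked_count]);
}
```

Note one limitation of this sketch: with a fixed table, allocations past MAX_TRACKED succeed but are no longer tracked, so freeall() would miss them.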

What problems can you see?

CommonTater

  • Guest
Re: atexit
« Reply #1 on: June 25, 2012, 09:30:42 pm »
If you are talking about program exit, Windows will do that for you as part of its cleanup process during a normal exit... no guarantees on an abnormal exit.

atexit as you have it is a good way to clean up on the way out... To fill it out a bit, take a look at the WinAPI's HeapWalk() and related functions.  One trick you can use is to create a private heap and then simply destroy it in atexit.

If you are talking about at the exit of a function you can take a look at THIS rudimentary garbage collector.
 
You can also do this with exception handling, try and finally...
The basic functions are in except.h or you can use the ErrorX library to make it a bit easier.
 
Code: [Select]
// program code

void *mymemory = NULL;

try
  {
     mymemory = malloc(2324578);
     // do a bunch of stuff with the allocated memory
  }
finally
  {
     free(mymemory);   // free(NULL) is harmless
     // this code is guaranteed to run
     // even if there's an error.
  }

// more program code
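For what it's worth, try/finally here comes from Pelles C's except.h extension; in portable standard C the same "cleanup always runs" pattern is usually written with a single cleanup label. A minimal sketch (process is an invented function name):

```c
/* Portable alternative to try/finally: every exit path jumps to one
   cleanup label, so the free() always runs. */
#include <stdlib.h>
#include <string.h>

int process(void)
{
    int rc = -1;
    void *mymemory = malloc(2324578);
    if (mymemory == NULL)
        goto cleanup;            /* allocation failed */

    /* ... do a bunch of stuff with the allocated memory ... */
    memset(mymemory, 0, 2324578);

    rc = 0;                      /* success */
cleanup:
    free(mymemory);              /* free(NULL) is harmless */
    return rc;
}
```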

 
« Last Edit: June 26, 2012, 10:26:56 am by CommonTater »

iZzz32

  • Guest
Re: atexit
« Reply #2 on: June 26, 2012, 02:15:03 pm »
Quote
no guarantees on an abnormal exit.
That is wrong. A modern OS will always free the memory no matter how the process was terminated. So there is no practical reason (other than something called "good programming style") to free memory before exit.

CommonTater

  • Guest
Re: atexit
« Reply #3 on: June 26, 2012, 04:18:29 pm »
Quote
no guarantees on an abnormal exit.
That is wrong. A modern OS will always free the memory no matter how the process was terminated. So there is no practical reason (other than something called "good programming style") to free memory before exit.

I would contest that last statement... there's plenty of good reason to free memory as soon as you're done with it.  If you have a program that runs for months at a time (as I do), the accumulation of bits of memory that are never freed can cripple its host system.  This is what "memory leaks" are about... programs that continuously accumulate more and more memory as they run.  No big deal for a 10 second directory sweep... a major freaking disaster for a server program.


Offline frankie

  • Global Moderator
  • Member
  • *****
  • Posts: 1713
Re: atexit
« Reply #4 on: June 26, 2012, 04:32:29 pm »
I suppose iZzz32 means that when a process terminates, the OS deallocates all of the process's resources, including memory.
In your case it seems that the process is still running and, maybe, only a thread aborts...

Offline Bitbeisser

  • Global Moderator
  • Member
  • *****
  • Posts: 761
Re: atexit
« Reply #5 on: June 26, 2012, 06:28:06 pm »
Quote
no guarantees on an abnormal exit.
That is wrong. A modern OS will always free the memory no matter how the process was terminated. So there is no practical reason (other than something called "good programming style") to free memory before exit.

I would contest that last statement... there's plenty of good reason to free memory as soon as you're done with it.  If you have a program that runs for months at a time (as I do), the accumulation of bits of memory that are never freed can cripple its host system.  This is what "memory leaks" are about... programs that continuously accumulate more and more memory as they run.  No big deal for a 10 second directory sweep... a major freaking disaster for a server program.
+1

I would never take it as a guarantee that ANY OS frees memory on program termination. Normally they "should", but I think there are too many examples out there showing that this is not always the case.

And I would absolutely second that leaving a program without properly cleaning up any previously allocated memory is (extremely) bad programming practice, just as it is not to close files as soon as they are no longer needed...

Ralf

CommonTater

  • Guest
Re: atexit
« Reply #6 on: June 26, 2012, 07:57:45 pm »
I suppose iZzz32 means that when a process terminates, the OS deallocates all of the process's resources, including memory.
In your case it seems that the process is still running and, maybe, only a thread aborts...

In the particular example I had in mind, we have a server/monitor running on a large custom database (18 GB in one case).  This program runs continuously.  There are a number of PCs connected through a switch that access the database directly, using custom software, and can also ask the server to print purchase orders, invoices and various reports.  The server also supports an autonomous backup thread that is launched hourly.

This HAS to keep going or the whole thing comes crashing down.

A memory leak of only one record (2 to 4 KB) would eventually eat up all of the server machine's memory, and the whole thing comes to a rather abrupt end.

It may not be anything our crew here would encounter, but it does provide a solid object lesson in careful programming.

iZzz32

  • Guest
Re: atexit
« Reply #7 on: June 26, 2012, 08:05:22 pm »
Quote from: frankie
I suppose iZzz32 means that when a process terminates, the OS deallocates all of the process's resources, including memory.
In your case it seems that the process is still running and, maybe, only a thread aborts...
Exactly.

Quote from: Bitbeisser
I would never take it as a guarantee that ANY OS frees memory on program termination. Normally they "should", but I think there are too many examples out there showing that this is not always the case.
If you have an x86-64 protected-mode OS, it will free the memory, simply because of the way protected mode and paging work.
And for non-x86 CPUs there's one obvious reason for the OS to do that too: if your program crashes and the OS does not free its memory, you get a memory leak, and the whole system has to be rebooted to reclaim that memory. Nobody wants that. Well, almost nobody: there is realtime/embedded programming, operating systems without process isolation, and so on.

Quote
without properly cleaning up any previously allocated memory is (extremely) bad programming practice, just as it is not to close files as soon as they are no longer needed...
I disagree. In a long-lived process, you should free resources as soon as you no longer need them. That is right, that is fine. But if, for example, your program just allocates memory, does some calculations and writes the results to an output file, there's no practical reason to close the output file (since exit() guarantees to fflush/fclose/whatever) and/or to free the memory. By the way, if you have a big data structure built from small pieces (a tree, for example), the OS will free it many times faster than you can.

And yes, it is considered "bad programming practice" just because, if you always remember to free resources, chances are you won't forget to do it when it is really required.

CommonTater, I was talking only about freeing resources right before exit. Everyone understands that you should not just throw your pointer away and keep running.

CommonTater, ok, got it. If I ever work for a big company I will always do free() before exit(), memmove() instead of memcpy(), fflush() before fclose(), and of course, I'll never forget to repeat assignment expressions three times… just to be sure. :P
« Last Edit: June 26, 2012, 08:32:52 pm by iZzz32 »

CommonTater

  • Guest
Re: atexit
« Reply #8 on: June 26, 2012, 08:15:22 pm »
I disagree. In a long-lived process, you should free resources as soon as you no longer need them. That is right, that is fine. But if, for example, your program just allocates memory, does some calculations and writes the results to an output file, there's no practical reason to close the output file (since exit() guarantees to fflush/fclose/whatever) and/or to free the memory. By the way, if you have a big data structure built from small pieces (a tree, for example), the OS will free it many times faster than you can.

Have you ever worked on a large project where data loss can kill a company? 

In those situations the rules are extremely simple: open file, write, flush, close... leaving an output file open means that any data sitting in memory buffers is just begging to be lost.  The same with memory... allocate, use, deallocate.
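That "open, write, flush, close" rule, with every step checked, might look like this in standard C (save_record is an invented helper name for illustration; it returns 0 on success, -1 on any failure):

```c
/* Open, write, flush, close -- and check every one of them, since
   any step can fail and silently lose data otherwise. */
#include <stdio.h>

int save_record(const char *path, const void *buf, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;

    /* a short write or a failed flush both mean the data isn't safe */
    if (fwrite(buf, 1, len, f) != len || fflush(f) != 0) {
        fclose(f);
        return -1;
    }

    /* fclose can fail too (e.g. disk full on the final write-out) */
    return fclose(f) == 0 ? 0 : -1;
}
```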

In my example, this is inventory data sensitive enough that a single record not correctly updated can throw the company's books off by several hundred thousand dollars...

Quote
And yes, it is considered "bad programming practice" just because if you always remember to free resources, chances are that you wouldn't forget to do that when it is really required.

Well, no... it's considered bad programming practice because entire banks and major corporations have suffered enormous losses over cavalier attitudes toward data safety.