gcc optimization freaks use big C files and have a term for it: 'amalgamation'.
Look here (5 MB).
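For anyone who has never seen one, an amalgamation is nothing exotic: the "main" source simply #includes the other C files, so the compiler sees one big translation unit. A minimal sketch (the file and function names here are invented for illustration; SQLite generates its real sqlite3.c with a build script):

/* add.c -- a tiny hypothetical "module" */
int add(int a, int b) { return a + b; }

/* amalgamation.c -- the whole program as a single translation unit */
#include <stdio.h>
#include "add.c"   /* pull the module's source straight in instead of linking it */

int main(void)
{
    /* add() now lives in the same translation unit, so the compiler
     * can inline it with plain -O2, no link-time optimization needed */
    printf("%d\n", add(2, 3));
    return 0;
}

Build it with something like gcc -O2 amalgamation.c and the whole program comes out of one compiler invocation, which is exactly why the optimization crowd likes this layout.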
There are some good arguments to split big sources into smaller modules:
- load and save times are faster
- modern IDEs can effectively handle multiple sources
- modules can be reused in other projects
- modules can be assigned to different team members
That is why splitting is taught as "good practice".
"Real practice", however, is that:
- load and save times are completely irrelevant with modern hardware, especially for the tiny "modules" that get posted in forums
- some IDEs can't even perform simple searches across sources
- modules are so specific that they'll never be reused in other projects
- modules can't be assigned to different team members because there is only one coder.
Take, for example, the SQLite project. Imagine some Linux user finds out that, under certain conditions that are hard to reproduce, the output is, ehm, inverted. You guess that there is a problem with the endianness somewhere.
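The kind of code you would be hunting for looks roughly like this (an illustrative sketch, not actual SQLite code):

#include <stdint.h>

/* Read a 32-bit big-endian value from a record header.
 * If someone "optimizes" this into a plain pointer cast, big-endian
 * machines keep working while little-endian ones quietly return the
 * bytes swapped -- a classic hard-to-reproduce endianness bug. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
           (uint32_t)p[3];
}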
Your codebase consists of 96 C sources. Load it into POIDE and try to find the bug. Or, alternatively, load the complete C source into an editor that offers you a listbox with all 73 matches for the term "endian" spread over 147,015 lines of code, as shown below.
Guess which strategy is better for finding the bug...