Hi Emi,
Databases that comply with the ACID standard (
http://en.wikipedia.org/wiki/ACID) ensure that there is no data loss by first writing data changes to the database log, as opposed to updating the actual data on the filesystem first (in the datafiles).
Each database has its own way of doing this, but it basically consists of writing the data to the logfile at each COMMIT and writing the data to the datafiles only when necessary.
So the COMMIT command is a way of telling the database to write the data changes to the logfile.
Both logfiles and datafiles reside on the filesystem, so why is writing to the logfile faster?
It is because the logfile is written sequentially, while the datafile is written at scattered locations and may even be fragmented.
Summing up: autocommit false is faster because you avoid going to the hard disk to write every change to the logfile; you keep the changes in RAM until you decide to write them to the logfile (every 10K rows, for instance).
Be aware that, eventually, you will need to write the data to the logfile, so you can't avoid that entirely. But performance is usually better if you write X rows at a time to the logfile, rather than writing each and every row one by one (because of the hard disk write overhead).
The number of rows you need per commit to get the best performance depends on your environment and is usually found by trial and error. For millions of rows, I usually commit every 10K or 50K rows.
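The batched-commit idea can be sketched in Python with the standard sqlite3 module (a minimal illustration, not your actual setup: the in-memory database stands in for a real one, and the 10K batch size is just the figure from above):

```python
import sqlite3

# Assumed batch size for illustration; tune it by testing in your environment.
BATCH_SIZE = 10_000

conn = sqlite3.connect(":memory:")  # stands in for your real database
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

pending = 0
for i in range(100_000):
    conn.execute("INSERT INTO t (id, val) VALUES (?, ?)", (i, f"row-{i}"))
    pending += 1
    if pending == BATCH_SIZE:
        conn.commit()   # one trip to the logfile for the whole batch
        pending = 0
conn.commit()           # flush the final partial batch, if any

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)
```

The same pattern applies in JDBC or any other client library: disable autocommit, count rows as you insert, and call commit only when the counter reaches your batch size.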
Regards,
Felipe