ref:http://stackoverflow.com/questions/200384/constant-amortized-time
If it really must be translated into Chinese, I think 摊算时间 or 均摊时间 ("amortized time") fits; note that it is not the same as average time.
--------------
Amortised time explained in simple terms:
If you do an operation, say, a million times, you don't really care about the worst case or the best case of that operation - what you care about is how much time is taken in total when you repeat the operation a million times.
So it doesn't matter if the operation is very slow once in a while, as long as "once in a while" is rare enough for the slowness to be diluted away. Essentially amortised time means "average time taken per operation, if you do many operations". Amortised time doesn't have to be constant; you can have linear and logarithmic amortised time or whatever else.
Let's take mats' example of a dynamic array, to which you repeatedly add new items. Normally adding an item takes constant time (that is, O(1)). But each time the array is full, you allocate twice as much space, copy your data into the new region, and free the old space. Assuming allocations and frees run in constant time, this enlargement process takes O(n) time, where n is the current size of the array.
So each time you enlarge, you take about twice as much time as the last enlarge. But you've also waited twice as long before doing it! The cost of each enlargement can thus be "spread out" among the insertions. This means that in the long term, the total time taken for adding m items to the array is O(m), and so the amortised time (i.e. time per insertion) is O(1).
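A minimal sketch of that doubling strategy (the class name, the starting capacity of 1, and the loop sizes below are assumptions for illustration; Python's own lists already do this internally):

```python
# Toy dynamic array illustrating the doubling strategy described above.
class DynamicArray:
    def __init__(self):
        self._capacity = 1                 # assumed starting capacity
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, item):
        if self._size == self._capacity:
            self._grow()                   # rare O(n) step
        self._data[self._size] = item      # usual O(1) step
        self._size += 1

    def _grow(self):
        # Allocate twice the space and copy everything over: O(n).
        self._capacity *= 2
        new_data = [None] * self._capacity
        for i in range(self._size):
            new_data[i] = self._data[i]
        self._data = new_data

# Appending m items costs O(m) in total, so each append is O(1) amortised.
arr = DynamicArray()
for i in range(1000):
    arr.append(i)
```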
---------------------------------
ref: http://stackoverflow.com/questions/19650636/amortized-analysis
Expected time:
We make some assumptions and, based on these assumptions, we make statements about the running time.
Hash tables are one such example. We assume that the data is well distributed, and claim that the running time of an operation is O(1), whereas the worst-case running time of an operation is actually O(n).
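A toy chained hash table makes that assumption concrete (the class name and the fixed bucket count of 16 are assumptions for the demo, not from the answer): lookups are O(1) only while the buckets stay short, which is exactly the "well-distributed data" assumption.

```python
# Toy hash table with separate chaining; names and the fixed
# bucket count are assumptions for illustration.
class ChainedHashTable:
    def __init__(self, buckets=16):
        self._buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # Expected O(1): buckets are short if keys spread evenly.
        # Worst case O(n): every key hashes to the same bucket.
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```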
Amortized time:
Even though a single operation may occasionally exceed the stated bound, the cost across many operations balances out to that bound.
(Well-implemented) self-resizing arrays are one such example. An insert occasionally takes O(n) to resize the array, but across many inserts each takes O(1) on average.
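A quick way to check the claim empirically (the copy counter and the starting capacity of 1 are assumptions for the demo): count how many element copies m appends trigger under doubling; the total stays below 2m, so the per-append cost is a constant.

```python
# Count the element copies caused by doubling over m appends;
# capacity starts at 1 (an assumption for this demo).
def copies_for_appends(m):
    capacity, size, copies = 1, 0, 0
    for _ in range(m):
        if size == capacity:
            copies += size   # resizing copies every existing element
            capacity *= 2
        size += 1
    return copies

for m in (10, 1000, 1_000_000):
    # Total copies stay below 2*m, so each append is O(1) amortised.
    print(m, copies_for_appends(m), copies_for_appends(m) / m)
```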