HashMap in Java resolves hash collisions with separate chaining (see: HashMap 原理): key-value pairs whose keys map to the same bucket index are stored in a linked list. As that list grows, lookups and insertions (which must check whether the key already exists) have to traverse it, so they slow down. JDK 1.8 added a mechanism that converts a long list into a red-black tree, but the conversion is not a cheap operation, so a bin is only treeified when the list length reaches TREEIFY_THRESHOLD.
```java
/**
 * The bin count threshold for using a tree rather than list for a
 * bin. Bins are converted to trees when adding an element to a
 * bin with at least this many nodes. The value must be greater
 * than 2 and should be at least 8 to mesh with assumptions in
 * tree removal about conversion back to plain bins upon
 * shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;
```
TREEIFY_THRESHOLD defaults to 8. The relevant check in putVal() looks like this:
```java
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
    treeifyBin(tab, hash);
```
In practice, though, a list is not treeified just because it grows past the threshold. When table.length (the number of buckets) is smaller than MIN_TREEIFY_CAPACITY, the map resizes first instead of converting the list into a red-black tree.
```java
/**
 * The smallest table capacity for which bins may be treeified.
 * (Otherwise the table is resized if too many nodes in a bin.)
 * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
 * between resizing and treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;
```
MIN_TREEIFY_CAPACITY defaults to 64. treeifyBin() looks roughly like this:
```java
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize(); // table still too small: resize first instead of treeifying
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        // ...
    }
}
```
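To watch this behavior from the outside, here is a small experiment (a sketch only: CollidingKey and TreeifyDemo are made-up names, and it peeks at HashMap internals via reflection, which on newer JDKs may require --add-opens java.base/java.util=ALL-UNNAMED). Every key shares one hashCode, so all entries pile into a single bucket; with 9 entries the capacity is still below 64 and the bin stays a linked list, and only after a couple more forced resizes does it become a tree:

```java
import java.lang.reflect.Field;
import java.util.HashMap;

// Made-up key class for the demo: every instance returns the same hashCode,
// so all keys land in the same bucket and the chain keeps growing.
final class CollidingKey {
    private final int id;
    CollidingKey(int id) { this.id = id; }
    @Override public int hashCode() { return 1; }
    @Override public boolean equals(Object o) {
        return o instanceof CollidingKey && ((CollidingKey) o).id == id;
    }
}

public class TreeifyDemo {

    // Peek at HashMap's internal "table" array via reflection.
    // On JDK 9+ this may need: --add-opens java.base/java.util=ALL-UNNAMED
    static void printBins(HashMap<?, ?> map) throws Exception {
        Field f = HashMap.class.getDeclaredField("table");
        f.setAccessible(true);
        Object[] table = (Object[]) f.get(map);
        if (table == null) return;
        System.out.println("capacity = " + table.length);
        for (Object bin : table) {
            if (bin != null) {
                // java.util.HashMap$Node for a list bin,
                // java.util.HashMap$TreeNode once the bin has been treeified
                System.out.println("  bin head: " + bin.getClass().getName());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        HashMap<CollidingKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 9; i++) map.put(new CollidingKey(i), i);
        // Chain already has 9 nodes, but capacity is still below 64,
        // so treeifyBin() resized instead of treeifying: bin head is a plain Node.
        printBins(map);

        for (int i = 9; i < 12; i++) map.put(new CollidingKey(i), i);
        // Capacity has now grown to 64 and the chain exceeded the threshold again,
        // so the bin has been converted to a red-black tree: bin head is a TreeNode.
        printBins(map);
    }
}
```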
Why 8?
How was the default value of 8 for TREEIFY_THRESHOLD chosen? The comment in the source code actually gives the answer:
Because TreeNodes are about twice the size of regular nodes, we use them only when bins contain enough nodes to warrant use (see TREEIFY_THRESHOLD). And when they become too small (due to removal or resizing) they are converted back to plain bins. In usages with well-distributed user hashCodes, tree bins are rarely used. Ideally, under random hashCodes, the frequency of nodes in bins follows a Poisson distribution with a parameter of about 0.5 on average for the default resizing threshold of 0.75, although with a large variance because of resizing granularity. Ignoring variance, the expected occurrences of list size k are (exp(-0.5) * pow(0.5, k) / factorial(k)). The first values are:
Treeification converts the bin's Node objects into TreeNode objects, and a TreeNode is about twice the size of a Node. Beyond the memory cost, treeification itself takes time, and when the number of nodes in a tree bin drops to UNTREEIFY_THRESHOLD (default 6) or below, the tree is converted back into a linked list. Frequent back-and-forth conversion would be a considerable expense.
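For reference, the shrink-back threshold mentioned above is also a constant in the HashMap source (the comment below is a paraphrase, not the original javadoc):

```java
// java.util.HashMap: when a resize splits a tree bin and leaves it with at
// most this many nodes, the bin is converted back into a plain linked list.
static final int UNTREEIFY_THRESHOLD = 6;
```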
So we need to pick a value k such that a list is converted to a red-black tree once its length reaches k. k cannot be too large, because a long list hurts insertion and lookup performance; k also cannot be too small, because treeification is not cheap, and we want the probability of a list ever reaching length k to be small enough that treeification rarely happens at all.
The probability that a list reaches length k is the probability that k key-value pairs land in the same bucket. Suppose the table has m buckets and holds n key-value pairs in total, and each pair is equally likely to fall into any bucket; for large m, the number of pairs X in a fixed bucket then follows (approximately) a Poisson distribution:

$$P(X = k) = \frac{\lambda^{k}}{k!} e^{-\lambda} \tag{1}$$

where the parameter λ is the expected number of pairs per bucket:

$$\lambda = \frac{n}{m}$$
In practice a HashMap usually goes through several resizes, each of which doubles the table length, so let us estimate n from that angle. A resize is triggered when the number of key-value pairs reaches the load factor (default 0.75f) times the current table length, so right after a resize to length m has completed:

$$n = 0.75 \times \frac{m}{2} = 0.375\,m$$

and just before the next resize is triggered:

$$n = 0.75\,m$$

Taking the mean of the two:

$$\bar{n} = \frac{0.375\,m + 0.75\,m}{2} = 0.5625\,m \quad\Rightarrow\quad \lambda = \frac{\bar{n}}{m} \approx 0.5$$

Substituting λ = 0.5 (the value the JDK comment also uses) into formula (1) gives the following probabilities:
- X = 0: P = 0.60653066
- X = 1: P = 0.30326533
- X = 2: P = 0.07581633
- X = 3: P = 0.01263606
- X = 4: P = 0.00157952
- X = 5: P = 0.00015795
- X = 6: P = 0.00001316
- X = 7: P = 0.00000094
- X = 8: P = 0.00000006
- X > 8: P less than 1 in ten million
As the table shows, the probability of 8 key-value pairs ending up in the same bucket is only 0.00000006, and the probability of more than 8 is below one in ten million, so 8 it is.
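As a sanity check, the probabilities above can be reproduced directly from formula (1) with λ = 0.5; a minimal sketch (the class name PoissonCheck is made up):

```java
// Computes P(X = k) = exp(-0.5) * 0.5^k / k! for k = 0..8,
// plus the tail probability P(X > 8).
public class PoissonCheck {
    public static void main(String[] args) {
        double lambda = 0.5;
        double p = Math.exp(-lambda);   // P(X = 0)
        double cumulative = 0;
        for (int k = 0; k <= 8; k++) {
            System.out.printf("X = %d: P = %.8f%n", k, p);
            cumulative += p;
            p = p * lambda / (k + 1);   // recurrence: P(X = k+1) = P(X = k) * λ / (k+1)
        }
        System.out.printf("X > 8: P = %.10f%n", 1 - cumulative);
    }
}
```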