Understanding HashMap from Its Source Code
(All translations of the Javadoc excerpts below are my own; if they bother you, feel free to skip them.)
HashMap Characteristics
Let's start with the class-level Javadoc to understand what HashMap is.
Hash table based implementation of the <tt>Map</tt> interface. This
* implementation provides all of the optional map operations, and permits
* <tt>null</tt> values and the <tt>null</tt> key. (The <tt>HashMap</tt>
* class is roughly equivalent to <tt>Hashtable</tt>, except that it is
* unsynchronized and permits nulls.) This class makes no guarantees as to
* the order of the map; in particular, it does not guarantee that the order
* will remain constant over time.
In other words: HashMap is a hash-table-based implementation of the Map interface. It provides all of the optional map operations and permits null values and a null key. It is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls. The class makes no guarantee about the order of the map and, in particular, does not guarantee that the order will remain constant over time.
This implementation provides constant-time performance for the basic
* operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
* disperses the elements properly among the buckets. Iteration over
* collection views requires time proportional to the "capacity" of the
* <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
* of key-value mappings). Thus, it's very important not to set the initial
* capacity too high (or the load factor too low) if iteration performance is
* important.
In other words: get and put run in constant time, assuming the hash function spreads the elements properly among the buckets. Iterating over a collection view takes time proportional to the capacity of the HashMap (the number of buckets) plus its size (the number of key-value mappings), so if iteration performance matters, do not set the initial capacity too high or the load factor too low.
An instance of <tt>HashMap</tt> has two parameters that affect its
* performance: <i>initial capacity</i> and <i>load factor</i>. The
* <i>capacity</i> is the number of buckets in the hash table, and the initial
* capacity is simply the capacity at the time the hash table is created. The
* <i>load factor</i> is a measure of how full the hash table is allowed to
* get before its capacity is automatically increased. When the number of
* entries in the hash table exceeds the product of the load factor and the
* current capacity, the hash table is <i>rehashed</i> (that is, internal data
* structures are rebuilt) so that the hash table has approximately twice the
* number of buckets.
In other words: a HashMap has two parameters that affect its performance, the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. The load factor measures how full the table is allowed to get before its capacity is automatically increased. When the number of entries exceeds the product of the load factor and the current capacity, the table is rehashed (its internal data structures are rebuilt) so that it has roughly twice as many buckets.
As a general rule, the default load factor (.75) offers a good
* tradeoff between time and space costs. Higher values decrease the
* space overhead but increase the lookup cost (reflected in most of
* the operations of the <tt>HashMap</tt> class, including
* <tt>get</tt> and <tt>put</tt>). The expected number of entries in
* the map and its load factor should be taken into account when
* setting its initial capacity, so as to minimize the number of
* rehash operations. If the initial capacity is greater than the
* maximum number of entries divided by the load factor, no rehash
* operations will ever occur.
In other words: as a general rule, the default load factor (0.75) offers a good trade-off between time and space. Higher values reduce the space overhead but increase the lookup cost (which shows up in most HashMap operations, including get and put). When setting the initial capacity, take the expected number of entries and the load factor into account so as to minimize the number of rehash operations; if the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash will ever occur.
<p>If many mappings are to be stored in a <tt>HashMap</tt>
* instance, creating it with a sufficiently large capacity will allow
* the mappings to be stored more efficiently than letting it perform
* automatic rehashing as needed to grow the table. Note that using
* many keys with the same {@code hashCode()} is a sure way to slow
* down performance of any hash table. To ameliorate impact, when keys
* are {@link Comparable}, this class may use comparison order among
* keys to help break ties.
In other words: if many mappings are to be stored in a HashMap, creating it with a sufficiently large capacity lets them be stored more efficiently than letting the table grow through automatic rehashing. Note that using many keys with the same hashCode() is a sure way to slow any hash table down; to ameliorate the impact, when keys are Comparable this class may use the comparison order among keys to help break ties.
* <p><strong>Note that this implementation is not synchronized.</strong>
* If multiple threads access a hash map concurrently, and at least one of
* the threads modifies the map structurally, it <i>must</i> be
* synchronized externally. (A structural modification is any operation
* that adds or deletes one or more mappings; merely changing the value
* associated with a key that an instance already contains is not a
* structural modification.) This is typically accomplished by
* synchronizing on some object that naturally encapsulates the map.
* <p>
* If no such object exists, the map should be "wrapped" using the
* {@link Collections#synchronizedMap Collections.synchronizedMap}
* method. This is best done at creation time, to prevent accidental
* unsynchronized access to the map:<pre>
* Map m = Collections.synchronizedMap(new HashMap(...));</pre>
In other words: this implementation is not synchronized. If multiple threads access a HashMap concurrently and at least one of them modifies it structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key the instance already contains is not a structural modification.) This is typically done by synchronizing on some object that naturally encapsulates the map; if no such object exists, wrap the map with Collections.synchronizedMap, preferably at creation time, to prevent accidental unsynchronized access.
* <p>The iterators returned by all of this class's "collection view methods"
* are <i>fail-fast</i>: if the map is structurally modified at any time after
* the iterator is created, in any way except through the iterator's own
* <tt>remove</tt> method, the iterator will throw a
* {@link ConcurrentModificationException}. Thus, in the face of concurrent
* modification, the iterator fails quickly and cleanly, rather than risking
* arbitrary, non-deterministic behavior at an undetermined time in the
* future.
In other words: the iterators returned by this class's collection-view methods are fail-fast. If the map is structurally modified at any time after an iterator is created, in any way other than through the iterator's own remove method, the iterator throws a ConcurrentModificationException. Faced with concurrent modification, the iterator thus fails quickly and cleanly instead of risking arbitrary, non-deterministic behavior at some undetermined time in the future.
* <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
* as it is, generally speaking, impossible to make any hard guarantees in the
* presence of unsynchronized concurrent modification. Fail-fast iterators
* throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
* Therefore, it would be wrong to write a program that depended on this
* exception for its correctness: <i>the fail-fast behavior of iterators
* should be used only to detect bugs.</i>
In other words: the fail-fast behavior of an iterator cannot be guaranteed; generally speaking, it is impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis, so it would be wrong to write a program that depends on this exception for its correctness: the fail-fast behavior should be used only to detect bugs.
From the class comment above we already get a clear picture of how HashMap is implemented, which factors affect it, and what the fail-fast mechanism does and how it should be used; the rest of this article walks through these points one by one.
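As a quick sanity check of the fail-fast behavior described above, here is a minimal sketch (the class name FailFastDemo is just for illustration): structurally modifying the map during iteration, other than through the iterator's own remove, typically makes the next call on the iterator throw a ConcurrentModificationException.

import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.remove("b"); // structural modification while iterating
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered: " + e);
        }
    }
}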
HashMap Fields and Their Meaning
When reading a class's source, I like to start from its inheritance hierarchy, its fields, and its constants; once the purpose of the constants is clear, the rest of the code is much easier to follow. (The inheritance hierarchy of HashMap is simple enough that readers can work through it on their own, so it is not covered here.)
Constants in HashMap
1. DEFAULT_INITIAL_CAPACITY
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
As the name suggests, this is the default initial capacity, used when a HashMap is created without an explicit capacity. The capacity must always be a power of two, which is tied to how hashes are mapped to buckets and how evenly entries are spread (see the sketch after this list).
2. MAXIMUM_CAPACITY
static final int MAXIMUM_CAPACITY = 1 << 30;
The maximum capacity, used when a HashMap is created with an explicit capacity: if the requested capacity exceeds this value, the effective capacity is capped at MAXIMUM_CAPACITY.
3. DEFAULT_LOAD_FACTOR
static final float DEFAULT_LOAD_FACTOR = 0.75f;
The default load factor, used when no load factor is specified at construction time. The load factor is consulted when the table grows: it determines the threshold at which the number of stored entries triggers a resize.
4. TREEIFY_THRESHOLD
static final int TREEIFY_THRESHOLD = 8;
The treeify threshold: when the number of nodes in a single bucket reaches this value, the linked list in that bucket is converted into a red-black tree (subject to MIN_TREEIFY_CAPACITY below).
5. UNTREEIFY_THRESHOLD
static final int UNTREEIFY_THRESHOLD = 6;
The untreeify threshold: when the number of nodes in a tree bin drops below this value, the red-black tree is converted back into a linked list.
6. MIN_TREEIFY_CAPACITY
static final int MIN_TREEIFY_CAPACITY = 64;
The minimum treeify capacity: a bucket is converted into a red-black tree only when the total table capacity is at least 64 and that bucket has reached the treeify threshold; below this capacity the table is resized instead.
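Here is the sketch mentioned above of why the capacity must be a power of two (the class and variable names are only illustrative): when the capacity n is a power of two, (n - 1) & hash simply keeps the low bits of the hash, which plays the role of a modulo without any division.

public class IndexDemo {
    public static void main(String[] args) {
        int capacity = 16; // a power of two, like DEFAULT_INITIAL_CAPACITY
        int[] hashes = {17, 33, 49, 50};
        for (int h : hashes) {
            int index = (capacity - 1) & h; // keeps the low 4 bits of the hash
            System.out.println("hash=" + h + " -> bucket " + index);
        }
    }
}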
Fields in HashMap
1. The table that stores the node data
transient Node<K, V>[] table;
2. The number of key-value mappings
transient int size;
3. The number of times this HashMap has been structurally modified (structural modification is explained in the class comment above). It is checked during iteration: if the count no longer matches the value the iterator expects, the iteration throws a ConcurrentModificationException.
transient int modCount;
4. The load factor
final float loadFactor;
5. The resize threshold, i.e. the next size value at which the table is resized (roughly capacity * load factor; see the sketch after this list)
int threshold;
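Because the threshold is essentially capacity * loadFactor, you can size a map up front when you roughly know how many entries it will hold and avoid rehashing entirely. A minimal sketch (the numbers and names are only an illustration):

import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;
        // choosing capacity >= expectedEntries / loadFactor means no rehash should ever occur
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;
        Map<String, Integer> map = new HashMap<>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put("key-" + i, i); // these inserts stay below the threshold, so no resize happens
        }
        System.out.println(map.size());
    }
}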
Common HashMap Methods
The core of put
/**
* Implements Map.put and related methods.
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K, V>[] tab;
Node<K, V> p;
int n, i;
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length; // if the table is null or empty, resize (i.e. allocate) it first
if ((p = tab[i = (n - 1) & hash]) == null) // the bucket for this hash is empty (p keeps the head node for later use)
tab[i] = newNode(hash, key, value, null); // create a new node and drop it into the empty bucket
else { // the bucket already holds at least one node
Node<K, V> e; // will point to the node whose key matches, if any
K k;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
e = p; // the head node has the same hash and an equal key, so remember it in e
else if (p instanceof TreeNode) // the bucket has been treeified, so insert or find the node via the tree
e = ((TreeNode<K, V>) p).putTreeVal(this, tab, hash, key, value);
else {
for (int binCount = 0; ; ++binCount) { // binCount counts the nodes so we know whether the list needs to be treeified
if ((e = p.next) == null) { // reached the tail of the list without a match
p.next = newNode(hash, key, value, null); // append a new node at the tail
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash); // convert the list in this bucket into a red-black tree
break;
}
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break; // found a node with an equal key, stop searching
p = e; // move on to the next node
}
}
if (e != null) { // existing mapping for key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null) // overwrite unless onlyIfAbsent is set and the old value is non-null
e.value = value; // update the node's value
afterNodeAccess(e);
return oldValue;
}
}
++modCount; // record the structural modification
if (++size > threshold) // resize if the new size exceeds the threshold
resize();
afterNodeInsertion(evict);
return null;
}
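The onlyIfAbsent flag above is what distinguishes put from putIfAbsent. A small illustration of the observable behavior:

import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));         // new mapping, prints null
        System.out.println(map.put("a", 2));         // overwrites, prints the previous value 1
        System.out.println(map.putIfAbsent("a", 3)); // onlyIfAbsent: keeps 2, prints 2
        System.out.println(map.get("a"));            // prints 2
    }
}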
The core of get
/**
* Implements Map.get and related methods.
*
* @param hash hash for key
* @param key the key
* @return the node, or null if none
*/
final Node<K, V> getNode(int hash, Object key) {
Node<K, V>[] tab;
Node<K, V> first, e;
int n;
K k;
if ((tab = table) != null && (n = tab.length) > 0 &&
(first = tab[(n - 1) & hash]) != null) { // make sure the table is non-empty and the bucket has data
if (first.hash == hash && // always check first node: if it matches, return it directly
((k = first.key) == key || (key != null && key.equals(k))))
return first;
if ((e = first.next) != null) { // there are more nodes after the first one
if (first instanceof TreeNode) // the bucket has been treeified, so search the tree
return ((TreeNode<K, V>) first).getTreeNode(hash, key);
do { // otherwise walk the linked list
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
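One detail worth noting: hash(key) maps a null key to 0, so a null key simply lands in bucket 0 and lookups work as usual. A small illustration:

import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "null-key value"); // HashMap permits one null key
        map.put("k", "v");
        System.out.println(map.get(null));      // prints "null-key value"
        System.out.println(map.get("k"));       // prints "v"
        System.out.println(map.get("missing")); // prints null: an absent key and a null value look the same via get
    }
}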
The resize method
/**
* Initializes or doubles table size. If null, allocates in
* accord with initial capacity target held in field threshold.
* Otherwise, because we are using power-of-two expansion, the
* elements from each bin must either stay at same index, or move
* with a power of two offset in the new table.
*
* @return the table
*/
final Node<K, V>[] resize() {
Node<K, V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length; // the current capacity
int oldThr = threshold; // the current threshold
int newCap, newThr = 0;
if (oldCap > 0) { // the table has already been allocated
if (oldCap >= MAXIMUM_CAPACITY) { // the capacity is already at its maximum
threshold = Integer.MAX_VALUE; // just raise the threshold to Integer.MAX_VALUE
return oldTab; // and keep the old table as is
} else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && // double the capacity, as long as it stays below the maximum
oldCap >= DEFAULT_INITIAL_CAPACITY) // and the old capacity is at least the default initial capacity
newThr = oldThr << 1; // double threshold
} else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
else { // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int) (DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float) newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float) MAXIMUM_CAPACITY ?
(int) ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings({"rawtypes", "unchecked"})
Node<K, V>[] newTab = (Node<K, V>[]) new Node[newCap];
table = newTab;
if (oldTab != null) { // the old table is non-empty, so its entries have to be moved
for (int j = 0; j < oldCap; ++j) { // walk every bucket of the old table
Node<K, V> e;
if ((e = oldTab[j]) != null) { // the bucket holds at least one node
oldTab[j] = null; // clear the old slot so it can be collected
if (e.next == null) // a single node: just recompute its index in the new table
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode) // a tree bin: let the tree split itself
((TreeNode<K, V>) e).split(this, newTab, j, oldCap);
else { // preserve order: split the list into a "low" list and a "high" list
Node<K, V> loHead = null, loTail = null;
Node<K, V> hiHead = null, hiTail = null;
Node<K, V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
} else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
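A minimal sketch of the low/high split above (the class and variable names are just for illustration): because the capacity doubles, a node either stays at its old index j when (hash & oldCap) == 0, or moves to j + oldCap.

public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21}; // 5 and 21 share bucket 5 while the capacity is 16
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            boolean stays = (h & oldCap) == 0;
            System.out.println("hash=" + h + ": old bucket " + oldIndex + ", new bucket " + newIndex
                    + (stays ? " (stays)" : " (moved by oldCap)"));
        }
    }
}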
The remove method
/**
* Implements Map.remove and related methods.
*
* @param hash hash for key
* @param key the key
* @param value the value to match if matchValue, else ignored
* @param matchValue if true only remove if value is equal
* @param movable if false do not move other nodes while removing
* @return the node, or null if none
*/
final Node<K, V> removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
Node<K, V>[] tab;
Node<K, V> p;
int n, index;
if ((tab = table) != null && (n = tab.length) > 0 &&
(p = tab[index = (n - 1) & hash]) != null) { // the table is non-empty; the bucket index is computed the same way as on insertion
Node<K, V> node = null, e;
K k;
V v;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
node = p; // the head node matches, remember it in node
else if ((e = p.next) != null) { // otherwise keep looking at the rest of the bucket
if (p instanceof TreeNode) // the bucket is a red-black tree
node = ((TreeNode<K, V>) p).getTreeNode(hash, key); // search the tree for the key
else { // otherwise it is a linked list, so search the list
do {
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
node = e;
break;
}
p = e;
} while ((e = e.next) != null);
}
}
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
if (node instanceof TreeNode) // the node lives in a tree bin, remove it from the tree
((TreeNode<K, V>) node).removeTreeNode(this, tab, movable);
else if (node == p) // the node is the head of the list, so the bucket now points to its successor
tab[index] = node.next;
else // the node is further down the list, unlink it from its predecessor
p.next = node.next;
++modCount;
--size;
afterNodeRemoval(node);
return node;
}
}
return null;
}
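The matchValue parameter is what backs the two-argument Map.remove(key, value) overload added in Java 8: the mapping is removed only if its current value equals the given one. For example:

import java.util.HashMap;
import java.util.Map;

public class RemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        System.out.println(map.remove("a", 2)); // false: the value does not match, nothing is removed
        System.out.println(map.remove("a", 1)); // true: the value matches, the mapping is removed
        System.out.println(map.remove("a"));    // null: the mapping is already gone
    }
}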
New HashMap Methods in Java 8
Java 8 reworked some of HashMap's methods and added several more convenient ones.
computeIfAbsent
/**
* Looks up the element with the given key; if it is present (with a non-null value) that value is
* returned, otherwise the value produced by mappingFunction is computed and stored in the map.
*
* @param key the key of the element
* @param mappingFunction the mapping function that computes a value for an absent key
* @return the value now associated with the key
*/
@Override
public V computeIfAbsent(K key,
Function<? super K, ? extends V> mappingFunction) {
if (mappingFunction == null)
throw new NullPointerException(); // a null mapping function is not allowed
int hash = hash(key); // compute the hash of the key
Node<K, V>[] tab;
Node<K, V> first;
int n, i;
int binCount = 0;
TreeNode<K, V> t = null;
Node<K, V> old = null;
if (size > threshold || (tab = table) == null ||
(n = tab.length) == 0) // resize if the size exceeds the threshold or the table has not been allocated yet
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null) { // fetch the first node of the bucket for this hash, if the bucket is non-empty
if (first instanceof TreeNode)
old = (t = (TreeNode<K, V>) first).getTreeNode(hash, key); // search the tree bin for the key
else {
Node<K, V> e = first;
K k;
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
old = e; // found the node for the given key
break;
}
++binCount;
} while ((e = e.next) != null);
}
V oldValue;
if (old != null && (oldValue = old.value) != null) { // the key is present with a non-null value, so return it without calling the function
afterNodeAccess(old);
return oldValue;
}
}
// only reached when no usable mapping was found
V v = mappingFunction.apply(key);
if (v == null) {
return null;
} else if (old != null) { // the node was found earlier but its value was null, so just fill the value in
old.value = v;
afterNodeAccess(old);
return v;
} else if (t != null)
t.putTreeVal(this, tab, hash, key, v); // the bucket is a tree bin, insert into the tree
else {
tab[i] = newNode(hash, key, v, first); // the key is absent, insert a new node at the head of the list
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount; // record the structural modification
++size;
afterNodeInsertion(true);
return v;
}
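A typical use of computeIfAbsent is building a map of collections without the usual get-then-put dance; the value is created only the first time a key is seen:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComputeIfAbsentDemo {
    public static void main(String[] args) {
        Map<String, List<String>> groups = new HashMap<>();
        groups.computeIfAbsent("fruit", k -> new ArrayList<>()).add("apple");
        groups.computeIfAbsent("fruit", k -> new ArrayList<>()).add("pear"); // the list already exists, so it is reused
        System.out.println(groups); // {fruit=[apple, pear]}
    }
}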
computeIfPresent
public V computeIfPresent(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException(); // a null remapping function is not allowed
Node<K, V> e;
V oldValue; // holds the current value
int hash = hash(key); // compute the hash of the key
if ((e = getNode(hash, key)) != null && // the key must already be mapped to a non-null value
(oldValue = e.value) != null) {
V v = remappingFunction.apply(key, oldValue);
if (v != null) {
e.value = v; // replace the value with the newly computed one
afterNodeAccess(e);
return v;
} else
removeNode(hash, key, null, false, true); // the function returned null, so remove the mapping
}
return null;
}
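computeIfPresent only touches keys that already have a non-null value, and removes the mapping when the function returns null, which makes it convenient for decrement-and-clean-up logic:

import java.util.HashMap;
import java.util.Map;

public class ComputeIfPresentDemo {
    public static void main(String[] args) {
        Map<String, Integer> stock = new HashMap<>();
        stock.put("apple", 1);
        stock.computeIfPresent("apple", (k, v) -> v > 1 ? v - 1 : null); // 1 -> null, so the mapping is removed
        stock.computeIfPresent("pear", (k, v) -> v + 1);                 // absent key: nothing happens
        System.out.println(stock); // {}
    }
}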
compute
/**
* Looks up the element for the given key and applies remappingFunction to the key and its current value
* (or null if the key is absent). A non-null result creates or updates the mapping; a null result removes
* an existing mapping and leaves an absent key untouched.
*
* @param key the key
* @param remappingFunction the remapping function
* @return the newly computed value (possibly null)
*/
@Override
public V compute(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K, V>[] tab;
Node<K, V> first;
int n, i;
int binCount = 0;
TreeNode<K, V> t = null;
Node<K, V> old = null;
if (size > threshold || (tab = table) == null || // resize if the size exceeds the threshold or the table has not been allocated yet
(n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null) { // the bucket for this hash is non-empty
if (first instanceof TreeNode) // the first node is a tree node, i.e. the bucket is a tree bin
old = (t = (TreeNode<K, V>) first).getTreeNode(hash, key); // search the tree for the key
else { // the bucket is a linked list
Node<K, V> e = first;
K k;
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
old = e; // found the node for the key
break;
}
++binCount;
} while ((e = e.next) != null);
}
}
V oldValue = (old == null) ? null : old.value; // null if the key was not found, otherwise its current value
V v = remappingFunction.apply(key, oldValue);
if (old != null) { // the key already has a node
if (v != null) {
old.value = v; // the function returned a non-null value, so update the node
afterNodeAccess(old);
} else // the function returned null, so remove the node
removeNode(hash, key, null, false, true);
} else if (v != null) { // the key was absent and the function returned a non-null value
if (t != null) // a non-null t means the bucket is a tree bin
t.putTreeVal(this, tab, hash, key, v);
else { // otherwise insert a new node at the head of the bucket's list
tab[i] = newNode(hash, key, v, first);
if (binCount >= TREEIFY_THRESHOLD - 1) // treeify the bucket if it has reached the threshold
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
}
return v;
}
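compute handles both the absent and the present case in one lambda, which makes it handy for counters:

import java.util.HashMap;
import java.util.Map;

public class ComputeDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : new String[]{"a", "b", "a"}) {
            // v is null the first time a word is seen, so start at 1; otherwise increment
            counts.compute(word, (k, v) -> v == null ? 1 : v + 1);
        }
        System.out.println(counts); // {a=2, b=1}
    }
}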
merge
/**
* Much like compute, except that the remapping function combines the existing value with the value passed
* in. If the key already has a value, the mapping is updated with the combined result (and removed if that
* result is null); if the key is absent, the given value is simply inserted.
*
* @param key the key whose value is to be merged
* @param value the value to merge in
* @param remappingFunction the function that combines the old value with the new one
* @return the value now associated with the key, or null if the mapping was removed
*/
@Override
public V merge(K key, V value,
BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
if (value == null)
throw new NullPointerException();
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K, V>[] tab;
Node<K, V> first;
int n, i;
int binCount = 0;
TreeNode<K, V> t = null;
Node<K, V> old = null;
if (size > threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode<K, V>) first).getTreeNode(hash, key); // search the tree bin for the key
else {
Node<K, V> e = first;
K k;
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
}
if (old != null) { // the key already has a node
V v;
if (old.value != null) // the node has a non-null value
v = remappingFunction.apply(old.value, value); // combine the old value with the new one
else
v = value; // there is no existing value, so just use the value passed in
if (v != null) {
old.value = v; // store the merged value in the node
afterNodeAccess(old);
} else
removeNode(hash, key, null, false, true); // the merged value is null, so remove the mapping
return v; // return the merged value
}
if (value != null) { // the key was absent, so insert the given value directly
if (t != null) // the bucket is a tree bin
t.putTreeVal(this, tab, hash, key, value); // insert the key with the given value into the tree
else {
tab[i] = newNode(hash, key, value, first); // create a new node and link it in at the head of the list
if (binCount >= TREEIFY_THRESHOLD - 1) // treeify the bucket if it has reached the threshold
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
}
return value;
}
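merge is the most concise way to accumulate values, for example summing counts with Integer::sum; a null result from the remapping function removes the mapping:

import java.util.HashMap;
import java.util.Map;

public class MergeDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : new String[]{"a", "b", "a"}) {
            counts.merge(word, 1, Integer::sum); // insert 1 if absent, otherwise add 1 to the old value
        }
        System.out.println(counts); // {a=2, b=1}
        counts.merge("a", 0, (oldV, newV) -> null); // a null result removes the mapping
        System.out.println(counts); // {b=1}
    }
}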
That is my current understanding of HashMap, written down as notes. If anything here is wrong, corrections are very welcome!