Review Hashtable extract node API

Message ID 0a927e9b-97d5-5fc8-fb36-2af5848e017b@gmail.com
Commit Message

François Dumont June 4, 2019, 5:19 p.m.
Hi

     Here is a patch to enhance the _Hashtable extract node API and fix 
a FIXME request.

     The enhancement to the extract node API is that extract(const 
key_type&) no longer calls extract(const_iterator). That call forced us 
to loop through the bucket nodes again to find the node preceding the 
one to extract. Even though a bucket should not contain many nodes (in 
unique key mode), this extra pass is easy to avoid.

     To fix the FIXME I introduced a node smart pointer type managing 
the node lifetime. The node is released from this smart pointer only 
once no exception can be raised. In the context of the node extract API 
the node handle is treated as a smart pointer, so the node handle 
remains the owner of the node if an exception is thrown while 
reinserting it; I hope that is the expected behavior.

     * include/bits/hashtable_policy.h
     (struct _NodeSmartPointer<_NodeAlloc>): New.
     (_Map_base<>::operator[](const key_type&)): Use latter, adapt.
     (_Map_base<>::operator[](key_type&&)): Likewise.
     * include/bits/hashtable.h
     (_Hashtable<>::__node_sp_t): New.
     (_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
     __node_type*, size_type)): Replace by...
(_Hashtable<>::_M_insert_unique_node<_NodeAccessor>(const key_type&,
     size_type, __hash_code, const _NodeAccessor&, size_type)): ...that.
     (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code,
     __node_type*)): Replace by...
(_Hashtable<>::_M_insert_multi_node<_NodeAccessor>(__node_type*,
     __hash_code, const _NodeAccessor&)): ...that.
     (_Hashtable<>::_M_reinsert_node): Adapt.
     (_Hashtable<>::_M_reinsert_node_multi): Adapt.
     (_Hashtable<>::_M_extract_node(size_t, __node_base*)): New.
     (_Hashtable<>::extract(const_iterator)): Use latter.
     (_Hashtable<>::extract(const _Key&)): Likewise.
     (_Hashtable<>::_M_merge_unique): Adapt.
     (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Adapt.
     (_Hashtable<>::_M_emplace<_Args>(const_iterator, false_type,
     _Args&&...)): Adapt.

Tested under Linux x86_64.

Ok to commit?

François

Comments

Jonathan Wakely June 5, 2019, 4:22 p.m. | #1
On 04/06/19 19:19 +0200, François Dumont wrote:
>Hi
>
>    Here is a patch to enhance the _Hashtable extract node API and fix
>a FIXME request.
>
>    The enhancement to the extract node API is that extract(const
>key_type&) no longer calls extract(const_iterator). That call forced
>us to loop through the bucket nodes again to find the node preceding
>the one to extract. Even though a bucket should not contain many nodes
>(in unique key mode), this extra pass is easy to avoid.


Nice.

>    To fix the FIXME I introduced a node smart pointer type managing
>the node lifetime. The node is released from this smart pointer only
>once no exception can be raised. In the context of the node extract
>API the node handle is treated as a smart pointer, so the node handle
>remains the owner of the node if an exception is thrown while
>reinserting it; I hope that is the expected behavior.


Yes, that's right.

I was going to suggest just using the node handle type instead of
inventing a new smart pointer, but the handle type uses std::optional
so isn't available for C++11/14.


>    * include/bits/hashtable_policy.h
>    (struct _NodeSmartPointer<_NodeAlloc>): New.
>    (_Map_base<>::operator[](const key_type&)): Use latter, adapt.
>    (_Map_base<>::operator[](key_type&&)): Likewise.
>    * include/bits/hashtable.h
>    (_Hashtable<>::__node_sp_t): New.
>    (_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
>    __node_type*, size_type)): Replace by...
>(_Hashtable<>::_M_insert_unique_node<_NodeAccessor>(const key_type&,
>    size_type, __hash_code, const _NodeAccessor&, size_type)): ...that.
>    (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code,
>    __node_type*)): Replace by...
>(_Hashtable<>::_M_insert_multi_node<_NodeAccessor>(__node_type*,
>    __hash_code, const _NodeAccessor&)): ...that.
>    (_Hashtable<>::_M_reinsert_node): Adapt.
>    (_Hashtable<>::_M_reinsert_node_multi): Adapt.
>    (_Hashtable<>::_M_extract_node(size_t, __node_base*)): New.
>    (_Hashtable<>::extract(const_iterator)): Use latter.
>    (_Hashtable<>::extract(const _Key&)): Likewise.
>    (_Hashtable<>::_M_merge_unique): Adapt.
>    (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Adapt.
>    (_Hashtable<>::_M_emplace<_Args>(const_iterator, false_type,
>    _Args&&...)): Adapt.
>
>Tested under Linux x86_64.
>
>Ok to commit?
>
>François
>


>diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
>index e2e3f016a35..307865b96bf 100644
>--- a/libstdc++-v3/include/bits/hashtable.h
>+++ b/libstdc++-v3/include/bits/hashtable.h
>@@ -197,6 +197,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>       using __hash_cached = typename __traits_type::__hash_cached;
>       using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
>       using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
>+      using __node_sp_t = __detail::_NodeSmartPointer<__node_alloc_type>;
> 
>       using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
> 
>@@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>       __node_base*
>       _M_get_previous_node(size_type __bkt, __node_base* __n);
> 
>-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
>-      // no element with its key already present). Take ownership of the node,
>-      // deallocate it on exception.
>+      // Insert node with key __k and hash code __code, in bucket __bkt if no
>+      // rehash (assumes no element with its key already present).
>+      template<typename _NodeAccessor>
> 	iterator
>-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>-			    __node_type* __n, size_type __n_elt = 1);
>+	_M_insert_unique_node(const key_type& __k, size_type __bkt,
>+			      __hash_code __code, const _NodeAccessor&,
>+			      size_type __n_elt = 1);
> 
>-      // Insert node with hash code __code. Take ownership of the node,
>-      // deallocate it on exception.
>+      // Insert node with hash code __code.
>+      template<typename _NodeAccessor>
> 	iterator
>-      _M_insert_multi_node(__node_type* __hint,
>-			   __hash_code __code, __node_type* __n);
>+	_M_insert_multi_node(__node_type* __hint, __hash_code __code,
>+			     const _NodeAccessor& __node_accessor);


It looks like most times you call these functions you pass an
identical lambda expression, but each of those lambda expressions will
create a unique type. That means you create different instantiations
of the function templates even though they do exactly the same thing.

That's just generating multiple copies of identical code. Passing in a
function object to provide the node pointer doesn't really seem
necessary anyway, so if it results in larger executables it's really
not desirable.

The attached patch still just passes in a node pointer (which the
function takes ownership of, unless it throws). Because the semantics
of _M_insert_multi_node change, we need the symbol name to change as
well (so old code linking to the old symbol doesn't find the new
definition with different semantics). Passing in the key makes it a
different symbol (in almost all cases the caller has the key available
already anyway).

An alternative design would be for _M_insert_(unique|multi)_node to
take a reference to the RAII type and set its _M_node member to null
after taking ownership of that pointer. That would be more explicit
about the ownership transfer, however it would require changes to the
_M_reinsert_node and _M_reinsert_node_multi functions which don't use
the RAII type (because the __node_type* is already owned by a node
handle).

> 
>       template<typename... _Args>
> 	std::pair<iterator, bool>
>@@ -805,9 +807,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> 	      }
> 	    else
> 	      {
>-		__ret.position
>-		  = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
>-		__nh._M_ptr = nullptr;
>+		__ret.position = _M_insert_unique_node(__k, __bkt, __code,
>+				[&__nh]()
>+				{ return std::exchange(__nh._M_ptr, nullptr); });


The existing code here can just be changed to pass in __k, but still
set __nh._M_ptr to null after the call returns.

i.e.

		__ret.position
		  = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
		__nh._M_ptr = nullptr;


> 		__ret.inserted = true;
> 	      }
> 	  }
>@@ -818,33 +820,23 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>       iterator
>       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
>       {
>-	iterator __ret;
> 	if (__nh.empty())
>-	  __ret = end();
>-	else
>-	  {
>+	  return end();
>+
> 	__glibcxx_assert(get_allocator() == __nh.get_allocator());
> 
> 	auto __code = this->_M_hash_code(__nh._M_key());
>-	    auto __node = std::exchange(__nh._M_ptr, nullptr);
>-	    // FIXME: this deallocates the node on exception.
>-	    __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
>-	  }
>-	return __ret;
>+	return _M_insert_multi_node(__hint._M_cur, __code,
>+			      [&__nh]()
>+			      { return std::exchange(__nh._M_ptr, nullptr); });


Now that the call won't deallocate anything, we can fix the FIXME by
just delaying resetting __nh._M_ptr until the call has returned:

        auto __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
        __nh._M_ptr = nullptr;
        return __ret;


>       }
> 
>+    private:
>       /// Extract a node.


We don't want a Doxygen "///" comment on a private function, just "//"
is OK. In this case I don't think we need any comment, the
_M_extract_node name is clear enough. Saying "extract a node" adds
nothing.


>       node_type
>-      extract(const_iterator __pos)
>+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
>       {
>-	__node_type* __n = __pos._M_cur;
>-	size_t __bkt = _M_bucket_index(__n);
>-
>-	// Look for previous node to unlink it from the erased one, this
>-	// is why we need buckets to contain the before begin to make
>-	// this search fast.
>-	__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
>-
>+	__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
> 	if (__prev_n == _M_buckets[__bkt])
> 	  _M_remove_bucket_begin(__bkt, __n->_M_next(),
> 	     __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
>@@ -861,14 +853,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> 	return { __n, this->_M_node_allocator() };
>       }
> 
>+    public:
>+      /// Extract a node.
>+      node_type
>+      extract(const_iterator __pos)
>+      {
>+	size_t __bkt = _M_bucket_index(__pos._M_cur);
>+	return _M_extract_node(__bkt, _M_get_previous_node(__bkt, __pos._M_cur));
>+      }
>+
>       /// Extract a node.
>       node_type
>       extract(const _Key& __k)
>       {
> 	node_type __nh;
>-	auto __pos = find(__k);
>-	if (__pos != end())
>-	  __nh = extract(const_iterator(__pos));
>+	__hash_code __code = this->_M_hash_code(__k);
>+	std::size_t __bkt = _M_bucket_index(__k, __code);
>+	if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
>+	  __nh = _M_extract_node(__bkt, __prev_node);
> 	return __nh;
>       }


Looks good.

>@@ -885,14 +887,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> 	  for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
> 	    {
> 	      auto __pos = __i++;
>-	      const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
>+	      const key_type& __k = this->_M_extract()(*__pos);
> 	      __hash_code __code = this->_M_hash_code(__k);
> 	      size_type __bkt = _M_bucket_index(__k, __code);
> 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
> 		{
> 		  auto __nh = __src.extract(__pos);
>-		  _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
>-		  __nh._M_ptr = nullptr;
>+		  _M_insert_unique_node(__k, __bkt, __code,
>+				[&__nh]()
>+				{ return std::exchange(__nh._M_ptr, nullptr); },
>+				__n_elt);


Again, the old code seems fine. Just pass in __k.

> 		  __n_elt = 1;
> 		}
> 	      else if (__n_elt != 1)
>@@ -1634,31 +1638,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>       -> pair<iterator, bool>
>       {
> 	// First build the node to get access to the hash code
>-	__node_type* __node
>-	  = this->_M_allocate_node(std::forward<_Args>(__args)...);
>-	const key_type& __k = this->_M_extract()(__node->_M_v());
>-	__hash_code __code;
>-	__try
>-	  {
>-	    __code = this->_M_hash_code(__k);
>-	  }
>-	__catch(...)
>-	  {
>-	    this->_M_deallocate_node(__node);


Aside: It's confusing that _M_allocate_node actually allocates a node
and constructs an element, and _M_deallocate_node destroys the element
and deallocates the node (and we have _M_deallocate_node_ptr which
just does the deallocation part). We should add comments explaining
that.


>diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
>index 878154ae210..ddcff0ec9d0 100644
>--- a/libstdc++-v3/include/bits/hashtable_policy.h
>+++ b/libstdc++-v3/include/bits/hashtable_policy.h
>@@ -170,6 +170,40 @@ namespace __detail
>       __hashtable_alloc& _M_h;
>     };
> 
>+  // Node smart pointer to benefit from RAII.
>+  template<typename _NodeAlloc>
>+    struct _NodeSmartPointer
>+    {
>+    private:
>+      using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
>+      using __node_type = typename __hashtable_alloc::__node_type;
>+
>+    public:
>+      _NodeSmartPointer(__hashtable_alloc& __h, __node_type* __node) noexcept
>+      : _M_h(__h), _M_node(__node) { }
>+
>+      ~_NodeSmartPointer()
>+      {
>+	if (_M_node)
>+	  _M_h._M_deallocate_node(_M_node);
>+      }
>+
>+      __node_type*
>+      _M_get() const { return _M_node; }
>+
>+      __node_type*
>+      _M_release() noexcept
>+      {
>+	__node_type* __tmp = _M_node;
>+	_M_node = nullptr;
>+	return __tmp;
>+      }
>+
>+    private:
>+      __hashtable_alloc& _M_h;
>+      __node_type* _M_node;
>+    };


This can be defined as a member of _Hashtable, and can be much
simpler:

      // Simple RAII type for managing a node containing an element
      struct _Scoped_node
      {
	_Scoped_node(__hashtable_alloc* __h, __node_type* __n)
	: _M_h(__h), _M_node(__n) { }

	~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };

	_Scoped_node(const _Scoped_node&) = delete;
	_Scoped_node& operator=(const _Scoped_node&) = delete;

	__hashtable_alloc* _M_h;
	__node_type* _M_node;
      };

I've attached a patch implementing these suggested changes. I think
it's simpler this way, what do you think?

(This patch is generated with 'git diff -b' so whitespace changes
aren't shown, so this shouldn't be applied directly. I can send
another version with the whitespace changes that is suitable to
commit).
diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index e2e3f016a35..87a15c8d037 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -256,6 +256,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using __reuse_or_alloc_node_gen_t =
 	__detail::_ReuseOrAllocNode<__node_alloc_type>;
 
+      // Simple RAII type for managing a node containing an element
+      struct _Scoped_node
+      {
+	_Scoped_node(__hashtable_alloc* __h, __node_type* __n)
+	: _M_h(__h), _M_node(__n) { }
+
+	~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };
+
+	_Scoped_node(const _Scoped_node&) = delete;
+	_Scoped_node& operator=(const _Scoped_node&) = delete;
+
+	__hashtable_alloc* _M_h;
+	__node_type* _M_node;
+      };
+
       // Metaprogramming for picking apart hash caching.
       template<typename _Cond>
 	using __if_hash_cached = __or_<__not_<__hash_cached>, _Cond>;
@@ -669,17 +684,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
-      // no element with its key already present). Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node __n with key __k and hash code __code, in bucket __bkt
+      // if no rehash (assumes no element with same key already present).
+      // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			    __node_type* __n, size_type __n_elt = 1);
+      _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			    __hash_code __code, __node_type* __n,
+			    size_type __n_elt = 1);
 
-      // Insert node with hash code __code. Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node __n with key __k and hash code __code.
+      // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_multi_node(__node_type* __hint,
+      _M_insert_multi_node(__node_type* __hint, const key_type& __k,
 			   __hash_code __code, __node_type* __n);
 
       template<typename... _Args>
@@ -806,7 +822,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    else
 	      {
 		__ret.position
-		  = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
+		  = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
 		__nh._M_ptr = nullptr;
 		__ret.inserted = true;
 	      }
@@ -818,33 +834,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       iterator
       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
       {
-	iterator __ret;
 	if (__nh.empty())
-	  __ret = end();
-	else
-	  {
+	  return end();
+
 	__glibcxx_assert(get_allocator() == __nh.get_allocator());
 
-	    auto __code = this->_M_hash_code(__nh._M_key());
-	    auto __node = std::exchange(__nh._M_ptr, nullptr);
-	    // FIXME: this deallocates the node on exception.
-	    __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
-	  }
+	const key_type& __k = __nh._M_key();
+	auto __code = this->_M_hash_code(__k);
+	auto __ret
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __nh._M_ptr);
+	__nh._M_ptr = nullptr;
 	return __ret;
       }
 
-      /// Extract a node.
+    private:
       node_type
-      extract(const_iterator __pos)
+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
       {
-	__node_type* __n = __pos._M_cur;
-	size_t __bkt = _M_bucket_index(__n);
-
-	// Look for previous node to unlink it from the erased one, this
-	// is why we need buckets to contain the before begin to make
-	// this search fast.
-	__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-
+	__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
 	if (__prev_n == _M_buckets[__bkt])
 	  _M_remove_bucket_begin(__bkt, __n->_M_next(),
 	     __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
@@ -861,14 +868,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	return { __n, this->_M_node_allocator() };
       }
 
+    public:
+      // Extract a node.
+      node_type
+      extract(const_iterator __pos)
+      {
+	size_t __bkt = _M_bucket_index(__pos._M_cur);
+	return _M_extract_node(__bkt,
+			       _M_get_previous_node(__bkt, __pos._M_cur));
+      }
+
       /// Extract a node.
       node_type
       extract(const _Key& __k)
       {
 	node_type __nh;
-	auto __pos = find(__k);
-	if (__pos != end())
-	  __nh = extract(const_iterator(__pos));
+	__hash_code __code = this->_M_hash_code(__k);
+	std::size_t __bkt = _M_bucket_index(__k, __code);
+	if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
+	  __nh = _M_extract_node(__bkt, __prev_node);
 	return __nh;
       }
 
@@ -885,13 +903,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
 	    {
 	      auto __pos = __i++;
-	      const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
+	      const key_type& __k = this->_M_extract()(*__pos);
 	      __hash_code __code = this->_M_hash_code(__k);
 	      size_type __bkt = _M_bucket_index(__k, __code);
 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
 		{
 		  auto __nh = __src.extract(__pos);
-		  _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
+		  _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr,
+					__n_elt);
 		  __nh._M_ptr = nullptr;
 		  __n_elt = 1;
 		}
@@ -1634,31 +1653,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> pair<iterator, bool>
       {
 	// First build the node to get access to the hash code
-	__node_type* __node
-	  = this->_M_allocate_node(std::forward<_Args>(__args)...);
-	const key_type& __k = this->_M_extract()(__node->_M_v());
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(__k);
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
-
+	_Scoped_node __node {
+	    this, this->_M_allocate_node(std::forward<_Args>(__args)...)
+	};
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
 	  // There is already an equivalent node, no insertion
-	    this->_M_deallocate_node(__node);
 	  return std::make_pair(iterator(__p), false);
-	  }
 
 	// Insert the node
-	return std::make_pair(_M_insert_unique_node(__bkt, __code, __node),
-			      true);
+	auto __pos = _M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return { __pos, true };
       }
 
   template<typename _Key, typename _Value,
@@ -1673,21 +1681,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> iterator
       {
 	// First build the node to get its hash code.
-	__node_type* __node =
-	  this->_M_allocate_node(std::forward<_Args>(__args)...);
-
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(this->_M_extract()(__node->_M_v()));
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
+	_Scoped_node __node {
+	    this, this->_M_allocate_node(std::forward<_Args>(__args)...)
+	};
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	__hash_code __code = this->_M_hash_code(__k);
+	auto __pos
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return __pos;
       }
 
   template<typename _Key, typename _Value,
@@ -1697,8 +1700,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			  __node_type* __node, size_type __n_elt)
+    _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			  __hash_code __code, __node_type* __node,
+			  size_type __n_elt)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1706,13 +1710,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count,
 					  __n_elt);
 
-      __try
-	{
       if (__do_rehash.first)
 	{
 	  _M_rehash(__do_rehash.second, __saved_state);
-	      __bkt
-		= _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
+	  __bkt = _M_bucket_index(__k, __code);
 	}
 
       this->_M_store_code(__node, __code);
@@ -1722,15 +1723,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       ++_M_element_count;
       return iterator(__node);
     }
-      __catch(...)
-	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
-	}
-    }
 
-  // Insert node, in bucket bkt if no rehash (assumes no element with its key
-  // already present). Take ownership of the node, deallocate it on exception.
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
@@ -1738,21 +1731,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
-			 __node_type* __node)
+    _M_insert_multi_node(__node_type* __hint, const key_type& __k,
+			 __hash_code __code, __node_type* __node)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
       std::pair<bool, std::size_t> __do_rehash
 	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
 
-      __try
-	{
       if (__do_rehash.first)
 	_M_rehash(__do_rehash.second, __saved_state);
 
       this->_M_store_code(__node, __code);
-	  const key_type& __k = this->_M_extract()(__node->_M_v());
       size_type __bkt = _M_bucket_index(__k, __code);
 
       // Find the node before an equivalent one or use hint if it exists and
@@ -1779,20 +1769,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	      }
 	}
       else
-	    // The inserted node has no equivalent in the
-	    // hashtable. We must insert the new node at the
-	    // beginning of the bucket to preserve equivalent
-	    // elements' relative positions.
+	// The inserted node has no equivalent in the hashtable. We must
+	// insert the new node at the beginning of the bucket to preserve
+	// equivalent elements' relative positions.
 	_M_insert_bucket_begin(__bkt, __node);
       ++_M_element_count;
       return iterator(__node);
     }
-      __catch(...)
-	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
-	}
-    }
 
   // Insert v if no element with its key is already present.
   template<typename _Key, typename _Value,
@@ -1811,12 +1794,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 
-	__node_type* __n = _M_find_node(__bkt, __k, __code);
-	if (__n)
-	  return std::make_pair(iterator(__n), false);
+	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
+	  return { iterator(__node), false };
 
-	__n = __node_gen(std::forward<_Arg>(__v));
-	return { _M_insert_unique_node(__bkt, __code, __n, __n_elt), true };
+	_Scoped_node __node{ this, __node_gen(std::forward<_Arg>(__v)) };
+	auto __pos
+	  = _M_insert_unique_node(__k, __bkt, __code, __node._M_node, __n_elt);
+	__node._M_node = nullptr;
+	return { __pos, true };
       }
 
   // Insert v unconditionally.
@@ -1837,9 +1822,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
 
 	// Second allocate new node so that we don't rehash if it throws.
-	__node_type* __node = __node_gen(std::forward<_Arg>(__v));
-
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	_Scoped_node __node{ this, __node_gen(std::forward<_Arg>(__v)) };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	auto __pos
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return __pos;
       }
 
   template<typename _Key, typename _Value,
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 878154ae210..7cc3f5c6d03 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -706,17 +706,19 @@ namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
+      typename __hashtable::_Scoped_node __node {
+	__h,
+	__h->_M_allocate_node(std::piecewise_construct,
 			      std::tuple<const key_type&>(__k),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
-
-      return __p->_M_v().second;
+			      std::tuple<>())
+      };
+      auto __pos
+	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+      __node._M_node = nullptr;
+      return __pos->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -731,17 +733,19 @@ namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
+      typename __hashtable::_Scoped_node __node {
+	__h,
+	__h->_M_allocate_node(std::piecewise_construct,
 			      std::forward_as_tuple(std::move(__k)),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
-
-      return __p->_M_v().second;
+			      std::tuple<>())
+      };
+      auto __pos
+	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+      __node._M_node = nullptr;
+      return __pos->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -1972,8 +1976,8 @@ namespace __detail
     }
 
   /**
-   * This type deals with all allocation and keeps an allocator instance through
-   * inheritance to benefit from EBO when possible.
+   * This type deals with all allocation and keeps an allocator instance
+   * through inheritance to benefit from EBO when possible.
    */
   template<typename _NodeAlloc>
     struct _Hashtable_alloc : private _Hashtable_ebo_helper<0, _NodeAlloc>
@@ -2012,17 +2016,21 @@ namespace __detail
       _M_node_allocator() const
       { return __ebo_node_alloc::_M_cget(); }
 
+      // Allocate a node and construct an element within it.
       template<typename... _Args>
 	__node_type*
 	_M_allocate_node(_Args&&... __args);
 
+      // Destroy the element within a node and deallocate the node.
       void
       _M_deallocate_node(__node_type* __n);
 
+      // Deallocate a node.
       void
       _M_deallocate_node_ptr(__node_type* __n);
 
-      // Deallocate the linked list of nodes pointed to by __n
+      // Deallocate the linked list of nodes pointed to by __n.
+      // The elements within the nodes are destroyed.
       void
       _M_deallocate_nodes(__node_type* __n);

Jonathan Wakely June 5, 2019, 4:43 p.m. | #2
On 05/06/19 17:22 +0100, Jonathan Wakely wrote:
>On 04/06/19 19:19 +0200, François Dumont wrote:
>>@@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>      __node_base*
>>      _M_get_previous_node(size_type __bkt, __node_base* __n);
>>
>>-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
>>-      // no element with its key already present). Take ownership of the node,
>>-      // deallocate it on exception.
>>+      // Insert node with key __k and hash code __code, in bucket __bkt if no
>>+      // rehash (assumes no element with its key already present).
>>+      template<typename _NodeAccessor>
>>	iterator
>>-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>>-			    __node_type* __n, size_type __n_elt = 1);
>>+	_M_insert_unique_node(const key_type& __k, size_type __bkt,
>>+			      __hash_code __code, const _NodeAccessor&,
>>+			      size_type __n_elt = 1);
>>
>>-      // Insert node with hash code __code. Take ownership of the node,
>>-      // deallocate it on exception.
>>+      // Insert node with hash code __code.
>>+      template<typename _NodeAccessor>
>>	iterator
>>-      _M_insert_multi_node(__node_type* __hint,
>>-			   __hash_code __code, __node_type* __n);
>>+	_M_insert_multi_node(__node_type* __hint, __hash_code __code,
>>+			     const _NodeAccessor& __node_accessor);
>
>It looks like most times you call these functions you pass an
>identical lambda expression, but each of those lambda expressions will
>create a unique type. That means you create different instantiations
>of the function templates even though they do exactly the same thing.
>
>That's just generating multiple copies of identical code. Passing in a
>function object to provide the node pointer doesn't really seem
>necessary anyway, so if it results in larger executables it's really
>not desirable.


Also I didn't really like the name NodeAccessor. It's not an accessor,
because it performs ownership transfer. Invoking __node_accessor()
returns a __node_type* by releasing it from the previous owner (by
setting the owner's pointer member to null).

Passing a const reference to something called NodeAccessor does not
make it clear that it performs a mutating operation like that! If the
_M_insert_unique_node and _M_insert_multi_node functions did the
__node_accessor() call *before* rehashing, and rehashing threw an
exception, then they would leak. So it's important that the
__node_accessor() call happens at the right time, and so it's important
to name it well.

In my suggested patch the naming isn't misleading, because we just
pass a raw __node_type* and have a new comment saying:

      // Takes ownership of __n if insertion succeeds, throws otherwise.

The function doesn't have a callable with non-local effects that
modifies an object outside the function. Because the caller sets the
previous owner's pointer to null there's no danger of it happening at
the wrong time; it can only happen after the function has returned and
ownership transfer has completed.
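The timing point above can be made concrete with a hedged toy model (invented names, not the real _Hashtable code): an insert function that may "rehash" (and throw) before it links the node. Passing a raw pointer, and letting the caller null its guard's pointer only after the call returns, keeps the guard responsible for cleanup on the throwing path:

```cpp
#include <stdexcept>
#include <vector>

struct Node { int value; };

// Toy guard, similar in spirit to the _Scoped_node type discussed here.
struct ScopedNode
{
  explicit ScopedNode(Node* n) : node(n) { }
  ~ScopedNode() { delete node; }
  ScopedNode(const ScopedNode&) = delete;
  ScopedNode& operator=(const ScopedNode&) = delete;
  Node* node;
};

// Toy table: insertion "rehashes" first (may throw), then links the node.
// Takes ownership of n only if insertion succeeds.
struct Table
{
  std::vector<Node*> nodes;
  bool fail_rehash = false;          // simulate an allocation failure

  void insert(Node* n)
  {
    if (fail_rehash)
      throw std::runtime_error("rehash failed");  // before ownership
                                                  // transfer: the caller's
                                                  // guard still owns n
    nodes.push_back(n);              // ownership transferred on success
  }
  ~Table() { for (Node* n : nodes) delete n; }
};

// Caller-side pattern from the suggested patch:
//   ScopedNode guard(new Node{1});
//   table.insert(guard.node);   // throws -> guard frees the node
//   guard.node = nullptr;       // success -> disarm the guard
```

With a callable "accessor" instead, the callee would decide *when* the owner's pointer gets nulled, which is exactly the hazard described above.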
Jonathan Wakely June 5, 2019, 7:18 p.m. | #3
On 05/06/19 17:43 +0100, Jonathan Wakely wrote:
>On 05/06/19 17:22 +0100, Jonathan Wakely wrote:
>>On 04/06/19 19:19 +0200, François Dumont wrote:
>>>@@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>     __node_base*
>>>     _M_get_previous_node(size_type __bkt, __node_base* __n);
>>>
>>>-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
>>>-      // no element with its key already present). Take ownership of the node,
>>>-      // deallocate it on exception.
>>>+      // Insert node with key __k and hash code __code, in bucket __bkt if no
>>>+      // rehash (assumes no element with its key already present).
>>>+      template<typename _NodeAccessor>
>>>	iterator
>>>-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>>>-			    __node_type* __n, size_type __n_elt = 1);
>>>+	_M_insert_unique_node(const key_type& __k, size_type __bkt,
>>>+			      __hash_code __code, const _NodeAccessor&,
>>>+			      size_type __n_elt = 1);
>>>
>>>-      // Insert node with hash code __code. Take ownership of the node,
>>>-      // deallocate it on exception.
>>>+      // Insert node with hash code __code.
>>>+      template<typename _NodeAccessor>
>>>	iterator
>>>-      _M_insert_multi_node(__node_type* __hint,
>>>-			   __hash_code __code, __node_type* __n);
>>>+	_M_insert_multi_node(__node_type* __hint, __hash_code __code,
>>>+			     const _NodeAccessor& __node_accessor);
>>
>>It looks like most times you call these functions you pass an
>>identical lambda expression, but each of those lambda expressions will
>>create a unique type. That means you create different instantiations
>>of the function templates even though they do exactly the same thing.
>>
>>That's just generating multiple copies of identical code. Passing in a
>>function object to provide the node pointer doesn't really seem
>>necessary anyway, so if it results in larger executables it's really
>>not desirable.
>
>Also I didn't really like the name NodeAccessor. It's not an accessor,
>because it performs ownership transfer. Invoking __node_accessor()
>returns a __node_type* by releasing it from the previous owner (by
>setting the owner's pointer member to null).
>
>Passing a const reference to something called NodeAccessor does not
>make it clear that it performs a mutating operation like that! If the
>_M_insert_unique_node and _M_insert_multi_node functions did the
>__node_accessor() call *before* rehashing, and rehashing threw an
>exception, then they would leak. So it's important that the
>__node_accessor() call happens at the right time, and so it's important
>to name it well.
>
>In my suggested patch the naming isn't misleading, because we just
>pass a raw __node_type* and have a new comment saying:
>
>     // Takes ownership of __n if insertion succeeds, throws otherwise.
>
>The function doesn't have a callable with non-local effects that
>modifies an object outside the function. Because the caller sets the
>previous owner's pointer to null there's no danger of it happening at
>the wrong time; it can only happen after the function has returned and
>ownership transfer has completed.


As a further evolution that simplifies some uses of _Scoped_node we
could give it a constructor that allocates a node and constructs an
element, as in the attached patch.
commit cba81aa3c4b41bef140de30e9651c5134ee2f5ef
Author: Jonathan Wakely <jwakely@redhat.com>
Date:   Wed Jun 5 20:12:01 2019 +0100

    with _Scoped_node constructor that creates the node

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 87a15c8d037..ab579a7059e 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -259,9 +259,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // Simple RAII type for managing a node containing an element
       struct _Scoped_node
       {
-	_Scoped_node(__hashtable_alloc* __h, __node_type* __n)
+	// Take ownership of a node with a constructed element.
+	_Scoped_node(__node_type* __n, __hashtable_alloc* __h)
 	: _M_h(__h), _M_node(__n) { }
 
+	// Allocate a node and construct an element within it.
+	template<typename... _Args>
+	  _Scoped_node(__hashtable_alloc* __h, _Args&&... __args)
+	  : _M_h(__h),
+	    _M_node(__h->_M_allocate_node(std::forward<_Args>(__args)...))
+	  { }
+
+	// Destroy element and deallocate node.
 	~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };
 
 	_Scoped_node(const _Scoped_node&) = delete;
@@ -1653,9 +1662,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> pair<iterator, bool>
       {
 	// First build the node to get access to the hash code
-	_Scoped_node __node {
-	    this, this->_M_allocate_node(std::forward<_Args>(__args)...)
-	};
+	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
@@ -1681,9 +1688,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> iterator
       {
 	// First build the node to get its hash code.
-	_Scoped_node __node {
-	    this, this->_M_allocate_node(std::forward<_Args>(__args)...)
-	};
+	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 
 	__hash_code __code = this->_M_hash_code(__k);
@@ -1797,7 +1802,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
 	  return { iterator(__node), false };
 
-	_Scoped_node __node{ this, __node_gen(std::forward<_Arg>(__v)) };
+	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 	auto __pos
 	  = _M_insert_unique_node(__k, __bkt, __code, __node._M_node, __n_elt);
 	__node._M_node = nullptr;
@@ -1822,7 +1827,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
 
 	// Second allocate new node so that we don't rehash if it throws.
-	_Scoped_node __node{ this, __node_gen(std::forward<_Arg>(__v)) };
+	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 	auto __pos
 	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 7cc3f5c6d03..f5809c7443a 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -711,9 +711,9 @@ namespace __detail
 
       typename __hashtable::_Scoped_node __node {
 	__h,
-	__h->_M_allocate_node(std::piecewise_construct,
-			      std::tuple<const key_type&>(__k),
-			      std::tuple<>())
+	std::piecewise_construct,
+	std::tuple<const key_type&>(__k),
+	std::tuple<>()
       };
       auto __pos
 	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
@@ -738,9 +738,9 @@ namespace __detail
 
       typename __hashtable::_Scoped_node __node {
 	__h,
-	__h->_M_allocate_node(std::piecewise_construct,
-			      std::forward_as_tuple(std::move(__k)),
-			      std::tuple<>())
+	std::piecewise_construct,
+	std::forward_as_tuple(std::move(__k)),
+	std::tuple<>()
       };
       auto __pos
 	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
Jonathan Wakely June 6, 2019, 11:41 a.m. | #4
On 05/06/19 20:18 +0100, Jonathan Wakely wrote:
>On 05/06/19 17:43 +0100, Jonathan Wakely wrote:
>>On 05/06/19 17:22 +0100, Jonathan Wakely wrote:
>>>On 04/06/19 19:19 +0200, François Dumont wrote:
>>>>@@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>    __node_base*
>>>>    _M_get_previous_node(size_type __bkt, __node_base* __n);
>>>>
>>>>-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
>>>>-      // no element with its key already present). Take ownership of the node,
>>>>-      // deallocate it on exception.
>>>>+      // Insert node with key __k and hash code __code, in bucket __bkt if no
>>>>+      // rehash (assumes no element with its key already present).
>>>>+      template<typename _NodeAccessor>
>>>>	iterator
>>>>-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>>>>-			    __node_type* __n, size_type __n_elt = 1);
>>>>+	_M_insert_unique_node(const key_type& __k, size_type __bkt,
>>>>+			      __hash_code __code, const _NodeAccessor&,
>>>>+			      size_type __n_elt = 1);
>>>>
>>>>-      // Insert node with hash code __code. Take ownership of the node,
>>>>-      // deallocate it on exception.
>>>>+      // Insert node with hash code __code.
>>>>+      template<typename _NodeAccessor>
>>>>	iterator
>>>>-      _M_insert_multi_node(__node_type* __hint,
>>>>-			   __hash_code __code, __node_type* __n);
>>>>+	_M_insert_multi_node(__node_type* __hint, __hash_code __code,
>>>>+			     const _NodeAccessor& __node_accessor);
>>>
>>>It looks like most times you call these functions you pass an
>>>identical lambda expression, but each of those lambda expressions will
>>>create a unique type. That means you create different instantiations
>>>of the function templates even though they do exactly the same thing.
>>>
>>>That's just generating multiple copies of identical code. Passing in a
>>>function object to provide the node pointer doesn't really seem
>>>necessary anyway, so if it results in larger executables it's really
>>>not desirable.
>>
>>Also I didn't really like the name NodeAccessor. It's not an accessor,
>>because it performs ownership transfer. Invoking __node_accessor()
>>returns a __node_type* by releasing it from the previous owner (by
>>setting the owner's pointer member to null).
>>
>>Passing a const reference to something called NodeAccessor does not
>>make it clear that it performs a mutating operation like that! If the
>>_M_insert_unique_node and _M_insert_multi_node functions did the
>>__node_accessor() call *before* rehashing, and rehashing threw an
>>exception, then they would leak. So it's important that the
>>__node_accessor() call happens at the right time, and so it's important
>>to name it well.
>>
>>In my suggested patch the naming isn't misleading, because we just
>>pass a raw __node_type* and have a new comment saying:
>>
>>    // Takes ownership of __n if insertion succeeds, throws otherwise.
>>
>>The function doesn't have a callable with non-local effects that
>>modifies an object outside the function. Because the caller sets the
>>previous owner's pointer to null there's no danger of it happening at
>>the wrong time; it can only happen after the function has returned and
>>ownership transfer has completed.
>
>As a further evolution that simplifies some uses of _Scoped_node we
>could give it a constructor that allocates a node and constructs an
>element, as in the attached patch.


Of course all this code is completely wrong, because it uses raw
pointers, not the allocator's pointer type. But that's a much bigger
problem that needs to be solved separately.
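The "allocator's pointer type" issue can be illustrated with a hedged sketch (invented `FancyPtr`/`FancyAlloc` types, far simpler than a real fancy pointer such as an offset pointer): container internals that store raw `__node_type*` break down when `std::allocator_traits<A>::pointer` is a class type.

```cpp
#include <memory>
#include <type_traits>

// A toy "fancy pointer": a class type standing in for T*. Real fancy
// pointers carry much more machinery; this is only enough to show the
// typedef plumbing.
template<typename T>
struct FancyPtr
{
  T* raw = nullptr;
  T& operator*() const { return *raw; }
};

// An allocator that advertises the fancy pointer type.
template<typename T>
struct FancyAlloc
{
  using value_type = T;
  using pointer = FancyPtr<T>;          // NOT T*

  pointer allocate(std::size_t n)
  { return pointer{ std::allocator<T>().allocate(n) }; }

  void deallocate(pointer p, std::size_t n)
  { std::allocator<T>().deallocate(p.raw, n); }
};

// Correct node-based code names its pointer type through the traits:
template<typename Alloc>
using alloc_pointer_t = typename std::allocator_traits<Alloc>::pointer;

static_assert(std::is_same<alloc_pointer_t<FancyAlloc<int>>,
                           FancyPtr<int>>::value,
              "allocator_traits picks up the allocator's pointer type");
static_assert(!std::is_same<alloc_pointer_t<FancyAlloc<int>>, int*>::value,
              "which is not a raw pointer here");
```

Code written against `alloc_pointer_t<Alloc>` keeps working for both raw and fancy pointers; code hard-coding `T*` does not.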
François Dumont June 7, 2019, 4:39 p.m. | #5
On 6/5/19 6:22 PM, Jonathan Wakely wrote:
> On 04/06/19 19:19 +0200, François Dumont wrote:
>> Hi
>>
>>     Here is a patch to enhance the _Hashtable extract node API and
>> fix a FIXME request.
>>
>>     The enhancement to the extract node API is that extract(const
>> key_type&) does not call extract(const_iterator) anymore. Doing so we
>> had to loop again through bucket nodes to find the previous node to
>> the one to extract. Even if a bucket shall not contain many nodes (in
>> unique key mode) it's easy to avoid it.
>
> Nice.
>
>>     To fix the FIXME I introduced a node smart pointer type managing
>> the node lifetime. The node is extracted from this smart pointer only
>> when there can't be any exception raised. In the context of the node
>> extract api the node handle is considered as a smart pointer. So the
>> node handle will remain owner of the node in case of exception when
>> reinserting it, I hope it is the expected behavior.
>
> Yes, that's right.
>
> I was going to suggest just using the node handle type instead of
> inventing a new smart pointer, but the handle type uses std::optional
> so isn't available for C++11/14.


I considered it too, or even use shared_ptr but thought it was overkill.
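For comparison, the "overkill" alternative mentioned here would look roughly like this hypothetical sketch (invented `TableAlloc`/`NodeDeleter` names, not proposed code): a std::unique_ptr with a custom deleter already gives guard-plus-release semantics, at the cost of extra machinery compared to a two-member struct:

```cpp
#include <memory>

struct Node { int value; };

// Stand-in for the hashtable's allocator object; a real deleter would
// call the node-deallocation helper on it (assumption for illustration).
struct TableAlloc
{
  int freed = 0;
  void deallocate_node(Node* n) { delete n; ++freed; }
};

// Deleter that routes destruction through the allocator object.
struct NodeDeleter
{
  TableAlloc* h;
  void operator()(Node* n) const { h->deallocate_node(n); }
};

using NodeGuard = std::unique_ptr<Node, NodeDeleter>;

// guard.release() plays the role of _M_release() / nulling _M_node:
// after release the deleter will not run, so ownership has moved on.
```

A shared_ptr would add reference counting (an allocation plus atomic operations) that a strictly scope-bound guard never needs, which is why the hand-written struct wins here.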

>
>
>>     * include/bits/hashtable_policy.h
>>     (struct _NodeSmartPointer<_NodeAlloc>): New.
>>     (_Map_base<>::operator[](const key_type&)): Use latter, adapt.
>>     (_Map_base<>::operator[](key_type&&)): Likewise.
>>     * include/bits/hashtable.h
>>     (_Hashtable<>::__node_sp_t): New.
>>     (_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
>>     __node_type*, size_type)): Replace by...
>> (_Hashtable<>::_M_insert_unique_node<_NodeAccessor>(const key_type&,
>>     size_type, __hash_code, const _NodeAccessor&, size_type)): ...that.
>>     (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code,
>>     __node_type*)): Replace by...
>> (_Hashtable<>::_M_insert_multi_node<_NodeAccessor>(__node_type*,
>>     __hash_code, const _NodeAccessor&)): ...that.
>>     (_Hashtable<>::_M_reinsert_node): Adapt.
>>     (_Hashtable<>::_M_reinsert_node_multi): Adapt.
>>     (_Hashtable<>::_M_extract_node(size_t, __node_base*)): New.
>>     (_Hashtable<>::extract(const_iterator)): Use latter.
>>     (_Hashtable<>::extract(const _Key&)): Likewise.
>>     (_Hashtable<>::_M_merge_unique): Adapt.
>>     (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Adapt.
>>     (_Hashtable<>::_M_emplace<_Args>(const_iterator, false_type,
>>     _Args&&...)): Adapt.
>>
>> Tested under Linux x86_64.
>>
>> Ok to commit ?
>>
>> François
>>
>
>> diff --git a/libstdc++-v3/include/bits/hashtable.h
>> b/libstdc++-v3/include/bits/hashtable.h
>> index e2e3f016a35..307865b96bf 100644
>> --- a/libstdc++-v3/include/bits/hashtable.h
>> +++ b/libstdc++-v3/include/bits/hashtable.h
>> @@ -197,6 +197,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       using __hash_cached = typename __traits_type::__hash_cached;
>>       using __node_type = __detail::_Hash_node<_Value,
>> __hash_cached::value>;
>>       using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
>> +      using __node_sp_t =
>> __detail::_NodeSmartPointer<__node_alloc_type>;
>>
>>       using __hashtable_alloc =
>> __detail::_Hashtable_alloc<__node_alloc_type>;
>>
>> @@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       __node_base*
>>       _M_get_previous_node(size_type __bkt, __node_base* __n);
>>
>> -      // Insert node with hash code __code, in bucket bkt if no
>> rehash (assumes
>> -      // no element with its key already present). Take ownership of
>> the node,
>> -      // deallocate it on exception.
>> +      // Insert node with key __k and hash code __code, in bucket
>> __bkt if no
>> +      // rehash (assumes no element with its key already present).
>> +      template<typename _NodeAccessor>
>>     iterator
>> -      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>> -                __node_type* __n, size_type __n_elt = 1);
>> +    _M_insert_unique_node(const key_type& __k, size_type __bkt,
>> +                  __hash_code __code, const _NodeAccessor&,
>> +                  size_type __n_elt = 1);
>>
>> -      // Insert node with hash code __code. Take ownership of the node,
>> -      // deallocate it on exception.
>> +      // Insert node with hash code __code.
>> +      template<typename _NodeAccessor>
>>     iterator
>> -      _M_insert_multi_node(__node_type* __hint,
>> -               __hash_code __code, __node_type* __n);
>> +    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
>> +                 const _NodeAccessor& __node_accessor);
>
> It looks like most times you call these functions you pass an
> identical lambda expression, but each of those lambda expressions will
> create a unique type. That means you create different instantiations
> of the function templates even though they do exactly the same thing.
>
> That's just generating multiple copies of identical code. Passing in a
> function object to provide the node pointer doesn't really seem
> necessary anyway, so if it results in larger executables it's really
> not desirable.

Maybe one of the reasons we have a difference when running performance
tests in -O3. I don't know why I abused so much of lambdas when I see
your proposal.


>
> The attached patch still just passes in a node pointer (which the
> function takes ownership of, unless it throws). Because the semantics
> of _M_insert_multi_node change, we need the symbol name to change as
> well (so old code linking to the old symbol doesn't find the new
> definition with different semantics). Passing in the key makes it a
> different symbol (in almost all cases the caller has the key available
> already anyway).
Very nice.
>
> An alternative design would be for _M_insert_(unique|multi)_node to
> take a reference to the RAII type and set its _M_node member to null
> after taking ownership of that pointer. That would be more explicit
> about the ownership transfer, however it would require changes to the
> _M_reinsert_node and _M_reinsert_node_multi functions which don't use
> the RAII type (because the __node_type* is already owned by a node
> handle).
>
>>
>>       template<typename... _Args>
>>     std::pair<iterator, bool>
>> @@ -805,9 +807,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>           }
>>         else
>>           {
>> -        __ret.position
>> -          = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
>> -        __nh._M_ptr = nullptr;
>> +        __ret.position = _M_insert_unique_node(__k, __bkt, __code,
>> +                [&__nh]()
>> +                { return std::exchange(__nh._M_ptr, nullptr); });
>
> The existing code here can just be changed to pass in __k, but still
> set __nh._M_ptr to null after the call returns.
>
> i.e.
>
>         __ret.position
>           = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
>         __nh._M_ptr = nullptr;
>
>
>>         __ret.inserted = true;
>>           }
>>       }
>> @@ -818,33 +820,23 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       iterator
>>       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
>>       {
>> -    iterator __ret;
>>     if (__nh.empty())
>> -      __ret = end();
>> -    else
>> -      {
>> +      return end();
>> +
>>     __glibcxx_assert(get_allocator() == __nh.get_allocator());
>>
>>     auto __code = this->_M_hash_code(__nh._M_key())
>> -        auto __node = std::exchange(__nh._M_ptr, nullptr);
>> -        // FIXME: this deallocates the node on exception.
>> -        __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
>> -      }
>> -    return __ret;
>> +    return _M_insert_multi_node(__hint._M_cur, __code,
>> +                  [&__nh]()
>> +                  { return std::exchange(__nh._M_ptr, nullptr); });
>
> Now that the call won't deallocate anything, we can fix the FIXME by
> just delaying resetting __nh._M_ptr until the call has returned:
>
>        auto __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
>        __nh._M_ptr = nullptr;
>        return __ret;
>
>
>>       }
>>
>> +    private:
>>       /// Extract a node.
>
> We don't want a Doxygen "///" comment on a private function, just "//"
> is OK. In this case I don't think we need any comment, the
> _M_extract_node name is clear enough. Saying "extract a node" adds
> nothing.
>
>
>>       node_type
>> -      extract(const_iterator __pos)
>> +      _M_extract_node(size_t __bkt, __node_base* __prev_n)
>>       {
>> -    __node_type* __n = __pos._M_cur;
>> -    size_t __bkt = _M_bucket_index(__n);
>> -
>> -    // Look for previous node to unlink it from the erased one, this
>> -    // is why we need buckets to contain the before begin to make
>> -    // this search fast.
>> -    __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
>> -
>> +    __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
>>     if (__prev_n == _M_buckets[__bkt])
>>       _M_remove_bucket_begin(__bkt, __n->_M_next(),
>>          __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
>> @@ -861,14 +853,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>     return { __n, this->_M_node_allocator() };
>>       }
>>
>> +    public:
>> +      /// Extract a node.
>> +      node_type
>> +      extract(const_iterator __pos)
>> +      {
>> +    size_t __bkt = _M_bucket_index(__pos._M_cur);
>> +    return _M_extract_node(__bkt, _M_get_previous_node(__bkt,
>> __pos._M_cur));
>> +      }
>> +
>>       /// Extract a node.
>>       node_type
>>       extract(const _Key& __k)
>>       {
>>     node_type __nh;
>> -    auto __pos = find(__k);
>> -    if (__pos != end())
>> -      __nh = extract(const_iterator(__pos));
>> +    __hash_code __code = this->_M_hash_code(__k);
>> +    std::size_t __bkt = _M_bucket_index(__k, __code);
>> +    if (__node_base* __prev_node = _M_find_before_node(__bkt, __k,
>> __code))
>> +      __nh = _M_extract_node(__bkt, __prev_node);
>>     return __nh;
>>       }
>
> Looks good.
>
>> @@ -885,14 +887,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
>>         {
>>           auto __pos = __i++;
>> -          const key_type& __k =
>> this->_M_extract()(__pos._M_cur->_M_v());
>> +          const key_type& __k = this->_M_extract()(*__pos);
>>           __hash_code __code = this->_M_hash_code(__k);
>>           size_type __bkt = _M_bucket_index(__k, __code);
>>           if (_M_find_node(__bkt, __k, __code) == nullptr)
>>         {
>>           auto __nh = __src.extract(__pos);
>> -          _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
>> -          __nh._M_ptr = nullptr;
>> +          _M_insert_unique_node(__k, __bkt, __code,
>> +                [&__nh]()
>> +                { return std::exchange(__nh._M_ptr, nullptr); },
>> +                __n_elt);
>
> Again, the old code seems fine. Just pass in __k.
>
>>           __n_elt = 1;
>>         }
>>           else if (__n_elt != 1)
>> @@ -1634,31 +1638,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       -> pair<iterator, bool>
>>       {
>>     // First build the node to get access to the hash code
>> -    __node_type* __node
>> -      = this->_M_allocate_node(std::forward<_Args>(__args)...);
>> -    const key_type& __k = this->_M_extract()(__node->_M_v());
>> -    __hash_code __code;
>> -    __try
>> -      {
>> -        __code = this->_M_hash_code(__k);
>> -      }
>> -    __catch(...)
>> -      {
>> -        this->_M_deallocate_node(__node);
>
> Aside: It's confusing that _M_allocate_node actually allocates a node
> and constructs an element, and _M_deallocate_node destroys the element
> and deallocates the node (and we have _M_deallocate_node_ptr which
> just does the deallocation part). We should add comments explaining
> that.
>
>
>> diff --git a/libstdc++-v3/include/bits/hashtable_policy.h
>> b/libstdc++-v3/include/bits/hashtable_policy.h
>> index 878154ae210..ddcff0ec9d0 100644
>> --- a/libstdc++-v3/include/bits/hashtable_policy.h
>> +++ b/libstdc++-v3/include/bits/hashtable_policy.h
>> @@ -170,6 +170,40 @@ namespace __detail
>>       __hashtable_alloc& _M_h;
>>     };
>>
>> +  // Node smart pointer to benefit from RAII.
>> +  template<typename _NodeAlloc>
>> +    struct _NodeSmartPointer
>> +    {
>> +    private:
>> +      using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
>> +      using __node_type = typename __hashtable_alloc::__node_type;
>> +
>> +    public:
>> +      _NodeSmartPointer(__hashtable_alloc& __h, __node_type* __node)
>> noexcept
>> +      : _M_h(__h), _M_node(__node) { }
>> +
>> +      ~_NodeSmartPointer()
>> +      {
>> +    if (_M_node)
>> +      _M_h._M_deallocate_node(_M_node);
>> +      }
>> +
>> +      __node_type*
>> +      _M_get() const { return _M_node; }
>> +
>> +      __node_type*
>> +      _M_release() noexcept
>> +      {
>> +    __node_type* __tmp = _M_node;
>> +    _M_node = nullptr;
>> +    return __tmp;
>> +      }
>> +
>> +    private:
>> +      __hashtable_alloc& _M_h;
>> +      __node_type* _M_node;
>> +    };
>
> This can be defined as a member of _Hashtable, and can be much
> simpler:
>
>      // Simple RAII type for managing a node containing an element
>      struct _Scoped_node
>      {
>     _Scoped_node(__hashtable_alloc* __h, __node_type* __n)
>     : _M_h(__h), _M_node(__n) { }
>
>     ~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };
>
>     _Scoped_node(const _Scoped_node&) = delete;
>     _Scoped_node& operator=(const _Scoped_node&) = delete;
>
>     __hashtable_alloc* _M_h;
>     __node_type* _M_node;
>      };
>
> I've attached a patch implementing these suggested changes. I think
> it's simpler this way, what do you think?


I think it is much better than what I proposed.


>
> (This patch is generated with 'git diff -b' so whitespace changes
> aren't shown, so this shouldn't be applied directly. I can send
> another version with the whitespace changes that is suitable to
> commit).
Mine was generated with 'git diff -w' for the same reason.

You can send the full patch to me if you prefer but feel free to commit
it yourself.

François
Jonathan Wakely June 17, 2019, 10:28 a.m. | #6
On 07/06/19 18:39 +0200, François Dumont wrote:
>On 6/5/19 6:22 PM, Jonathan Wakely wrote:

>>On 04/06/19 19:19 +0200, François Dumont wrote:

>>>Hi

>>>

>>>    Here is a patch to enhance the _Hashtable extract node API and 

>>>fix a FIXME request.

>>>

>>>    The enhancement to the extract node Api is that extract(const 

>>>key_type&) do not call extract(const_iterator) anymore. Doing so 

>>>we had to loop again through bucket nodes to find the previous 

>>>node to the one to extract. Even if a bucket shall not contain 

>>>many nodes (in unique key mode) it's easy to avoid it.

>>

>>Nice.

>>

>>>    To fix the FIXME I introduced a node smart pointer type 

>>>managing the node lifetime. The node is extracted from this smart 

>>>pointer only when there can't be any exception raised. In the 

>>>context of the node extract api the node handle is considered as a 

>>>smart pointer. So the node handle will remain owner of the node in 

>>>case of exception when reinserting it, I hope it is the expected 

>>>behavior.

>>

>>Yes, that's right.

>>

>>I was going to suggest just using the node handle type instead of

>>inventing a new smart pointer, but the handle type uses std::optional

>>so isn't available for C++11/14.

>

>I considered it too, or even use shared_ptr but thought it was overkill.

>

>>

>>

>>>    * include/bits/hashtable_policy.h

>>>    (struct _NodeSmartPointer<_NodeAlloc>): New.

>>>    (_Map_base<>::operator[](const key_type&)): Use latter, adapt.

>>>    (_Map_base<>::operator[](key_type&&)): Likewise.

>>>    * include/bits/hashtable.h

>>>    (_Hashtable<>::__node_sp_t): New.

>>>    (_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,

>>>    __node_type*, size_type)): Replace by...

>>>(_Hashtable<>::_M_insert_unique_node<_NodeAccessor>(const key_type&,

>>>    size_type, __hash_code, const _NodeAccessor&, size_type)): ...that.

>>>    (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code,

>>>    __node_type*)): Replace by...

>>>(_Hashtable<>::_M_insert_multi_node<_NodeAccessor>(__node_type*,

>>>    __hash_code, const _NodeAccessor&)): ...that.

>>>    (_Hashtable<>::_M_reinsert_node): Adapt.

>>>    (_Hashtable<>::_M_reinsert_node_multi): Adapt.

>>>    (_Hashtable<>::_M_extract_node(size_t, __node_base*)): New.

>>>    (_Hashtable<>::extract(const_iterator)): Use latter.

>>>    (_Hashtable<>::extract(const _Key&)): Likewise.

>>>    (_Hashtable<>::_M_merge_unique): Adapt.

>>>    (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Adapt.

>>>    (_Hashtable<>::_M_emplace<_Args>(const_iterator, false_type,

>>>    _Args&&...)): Adapt.

>>>

>>>Tested under Linux x86_64.

>>>

>>>Ok to commit ?

>>>

>>>François

>>>

>>

>>>diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
>>>index e2e3f016a35..307865b96bf 100644
>>>--- a/libstdc++-v3/include/bits/hashtable.h
>>>+++ b/libstdc++-v3/include/bits/hashtable.h
>>>@@ -197,6 +197,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>      using __hash_cached = typename __traits_type::__hash_cached;
>>>      using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
>>>      using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
>>>+      using __node_sp_t = __detail::_NodeSmartPointer<__node_alloc_type>;
>>>
>>>      using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
>>>
>>>@@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>      __node_base*
>>>      _M_get_previous_node(size_type __bkt, __node_base* __n);
>>>
>>>-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
>>>-      // no element with its key already present). Take ownership of the node,
>>>-      // deallocate it on exception.
>>>+      // Insert node with key __k and hash code __code, in bucket __bkt if no
>>>+      // rehash (assumes no element with its key already present).
>>>+      template<typename _NodeAccessor>
>>>    iterator
>>>-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>>>-                __node_type* __n, size_type __n_elt = 1);
>>>+    _M_insert_unique_node(const key_type& __k, size_type __bkt,
>>>+                  __hash_code __code, const _NodeAccessor&,
>>>+                  size_type __n_elt = 1);
>>>
>>>-      // Insert node with hash code __code. Take ownership of the node,
>>>-      // deallocate it on exception.
>>>+      // Insert node with hash code __code.
>>>+      template<typename _NodeAccessor>
>>>    iterator
>>>-      _M_insert_multi_node(__node_type* __hint,
>>>-               __hash_code __code, __node_type* __n);
>>>+    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
>>>+                 const _NodeAccessor& __node_accessor);

>>
>>It looks like most times you call these functions you pass an identical
>>lambda expression, but each of those lambda expressions will create a
>>unique type. That means you create different instantiations of the
>>function templates even though they do exactly the same thing.
>>
>>That's just generating multiple copies of identical code. Passing in a
>>function object to provide the node pointer doesn't really seem
>>necessary anyway, so if it results in larger executables it's really
>>not desirable.

>
>Maybe that's one of the reasons we see a difference when running the
>performance tests at -O3. I don't know why I abused lambdas so much;
>seeing your proposal makes it clear they weren't needed.
>

>>
>>The attached patch still just passes in a node pointer (which the
>>function takes ownership of, unless it throws). Because the semantics
>>of _M_insert_multi_node change, we need the symbol name to change as
>>well (so old code linking to the old symbol doesn't find the new
>>definition with different semantics). Passing in the key makes it a
>>different symbol (in almost all cases the caller has the key available
>>already anyway).
>
>Very nice.
>
>>
>>An alternative design would be for _M_insert_(unique|multi)_node to
>>take a reference to the RAII type and set its _M_node member to null
>>after taking ownership of that pointer. That would be more explicit
>>about the ownership transfer, however it would require changes to the
>>_M_reinsert_node and _M_reinsert_node_multi functions which don't use
>>the RAII type (because the __node_type* is already owned by a node
>>handle).
>>

>>>
>>>      template<typename... _Args>
>>>    std::pair<iterator, bool>
>>>@@ -805,9 +807,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>          }
>>>        else
>>>          {
>>>-        __ret.position
>>>-          = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
>>>-        __nh._M_ptr = nullptr;
>>>+        __ret.position = _M_insert_unique_node(__k, __bkt, __code,
>>>+                [&__nh]()
>>>+                { return std::exchange(__nh._M_ptr, nullptr); });
>>
>>The existing code here can just be changed to pass in __k, but still
>>set __nh._M_ptr to null after the call returns, i.e.
>>
>>        __ret.position
>>          = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
>>        __nh._M_ptr = nullptr;
>>
>>>        __ret.inserted = true;
>>>          }
>>>      }
>>>@@ -818,33 +820,23 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>      iterator
>>>      _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
>>>      {
>>>-    iterator __ret;
>>>    if (__nh.empty())
>>>-      __ret = end();
>>>-    else
>>>-      {
>>>+      return end();
>>>+
>>>    __glibcxx_assert(get_allocator() == __nh.get_allocator());
>>>
>>>    auto __code = this->_M_hash_code(__nh._M_key());
>>>-        auto __node = std::exchange(__nh._M_ptr, nullptr);
>>>-        // FIXME: this deallocates the node on exception.
>>>-        __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
>>>-      }
>>>-    return __ret;
>>>+    return _M_insert_multi_node(__hint._M_cur, __code,
>>>+                  [&__nh]()
>>>+                  { return std::exchange(__nh._M_ptr, nullptr); });
>>
>>Now that the call won't deallocate anything, we can fix the FIXME by
>>just delaying resetting __nh._M_ptr until the call has returned:
>>
>>       auto __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
>>       __nh._M_ptr = nullptr;
>>       return __ret;
>>

>>
>>>      }
>>>
>>>+    private:
>>>      /// Extract a node.
>>
>>We don't want a Doxygen "///" comment on a private function, just "//"
>>is OK. In this case I don't think we need any comment, the
>>_M_extract_node name is clear enough. Saying "extract a node" adds
>>nothing.
>>
>>>      node_type
>>>-      extract(const_iterator __pos)
>>>+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
>>>      {
>>>-    __node_type* __n = __pos._M_cur;
>>>-    size_t __bkt = _M_bucket_index(__n);
>>>-
>>>-    // Look for previous node to unlink it from the erased one, this
>>>-    // is why we need buckets to contain the before begin to make
>>>-    // this search fast.
>>>-    __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
>>>-
>>>+    __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
>>>    if (__prev_n == _M_buckets[__bkt])
>>>      _M_remove_bucket_begin(__bkt, __n->_M_next(),
>>>         __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
>>>@@ -861,14 +853,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>    return { __n, this->_M_node_allocator() };
>>>      }
>>>
>>>+    public:
>>>+      /// Extract a node.
>>>+      node_type
>>>+      extract(const_iterator __pos)
>>>+      {
>>>+    size_t __bkt = _M_bucket_index(__pos._M_cur);
>>>+    return _M_extract_node(__bkt, _M_get_previous_node(__bkt, __pos._M_cur));
>>>+      }
>>>+
>>>      /// Extract a node.
>>>      node_type
>>>      extract(const _Key& __k)
>>>      {
>>>    node_type __nh;
>>>-    auto __pos = find(__k);
>>>-    if (__pos != end())
>>>-      __nh = extract(const_iterator(__pos));
>>>+    __hash_code __code = this->_M_hash_code(__k);
>>>+    std::size_t __bkt = _M_bucket_index(__k, __code);
>>>+    if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
>>>+      __nh = _M_extract_node(__bkt, __prev_node);
>>>    return __nh;
>>>      }
>>
>>Looks good.
>>

>>>@@ -885,14 +887,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>      for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
>>>        {
>>>          auto __pos = __i++;
>>>-          const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
>>>+          const key_type& __k = this->_M_extract()(*__pos);
>>>          __hash_code __code = this->_M_hash_code(__k);
>>>          size_type __bkt = _M_bucket_index(__k, __code);
>>>          if (_M_find_node(__bkt, __k, __code) == nullptr)
>>>        {
>>>          auto __nh = __src.extract(__pos);
>>>-          _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
>>>-          __nh._M_ptr = nullptr;
>>>+          _M_insert_unique_node(__k, __bkt, __code,
>>>+                [&__nh]()
>>>+                { return std::exchange(__nh._M_ptr, nullptr); },
>>>+                __n_elt);
>>
>>Again, the old code seems fine. Just pass in __k.
>>
>>>          __n_elt = 1;
>>>        }
>>>          else if (__n_elt != 1)
>>>@@ -1634,31 +1638,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>      -> pair<iterator, bool>
>>>      {
>>>    // First build the node to get access to the hash code
>>>-    __node_type* __node
>>>-      = this->_M_allocate_node(std::forward<_Args>(__args)...);
>>>-    const key_type& __k = this->_M_extract()(__node->_M_v());
>>>-    __hash_code __code;
>>>-    __try
>>>-      {
>>>-        __code = this->_M_hash_code(__k);
>>>-      }
>>>-    __catch(...)
>>>-      {
>>>-        this->_M_deallocate_node(__node);
>>
>>Aside: It's confusing that _M_allocate_node actually allocates a node
>>and constructs an element, and _M_deallocate_node destroys the element
>>and deallocates the node (and we have _M_deallocate_node_ptr which
>>just does the deallocation part). We should add comments explaining
>>that.
>>

>>>diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
>>>index 878154ae210..ddcff0ec9d0 100644
>>>--- a/libstdc++-v3/include/bits/hashtable_policy.h
>>>+++ b/libstdc++-v3/include/bits/hashtable_policy.h
>>>@@ -170,6 +170,40 @@ namespace __detail
>>>      __hashtable_alloc& _M_h;
>>>    };
>>>
>>>+  // Node smart pointer to benefit from RAII.
>>>+  template<typename _NodeAlloc>
>>>+    struct _NodeSmartPointer
>>>+    {
>>>+    private:
>>>+      using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
>>>+      using __node_type = typename __hashtable_alloc::__node_type;
>>>+
>>>+    public:
>>>+      _NodeSmartPointer(__hashtable_alloc& __h, __node_type* __node) noexcept
>>>+      : _M_h(__h), _M_node(__node) { }
>>>+
>>>+      ~_NodeSmartPointer()
>>>+      {
>>>+    if (_M_node)
>>>+      _M_h._M_deallocate_node(_M_node);
>>>+      }
>>>+
>>>+      __node_type*
>>>+      _M_get() const { return _M_node; }
>>>+
>>>+      __node_type*
>>>+      _M_release() noexcept
>>>+      {
>>>+    __node_type* __tmp = _M_node;
>>>+    _M_node = nullptr;
>>>+    return __tmp;
>>>+      }
>>>+
>>>+    private:
>>>+      __hashtable_alloc& _M_h;
>>>+      __node_type* _M_node;
>>>+    };
>>
>>This can be defined as a member of _Hashtable, and can be much simpler:
>>
>>     // Simple RAII type for managing a node containing an element
>>     struct _Scoped_node
>>     {
>>    _Scoped_node(__hashtable_alloc* __h, __node_type* __n)
>>    : _M_h(__h), _M_node(__n) { }
>>
>>    ~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };
>>
>>    _Scoped_node(const _Scoped_node&) = delete;
>>    _Scoped_node& operator=(const _Scoped_node&) = delete;
>>
>>    __hashtable_alloc* _M_h;
>>    __node_type* _M_node;
>>     };
>>
>>I've attached a patch implementing these suggested changes. I think
>>it's simpler this way, what do you think?
>
>I think it is much better than what I proposed.
>
>>
>>(This patch is generated with 'git diff -b' so whitespace changes
>>aren't shown, so this shouldn't be applied directly. I can send
>>another version with the whitespace changes that is suitable to
>>commit).
>
>Mine was generated with 'git diff -w' for the same reason.
>
>You can send the full patch to me if you prefer but feel free to
>commit it yourself.


Here's what I've committed. Compared to the previous patch, this adds a
constructor to _Scoped_node that creates the new node, so the caller
doesn't have to create it:

       // Allocate a node and construct an element within it.
       template<typename... _Args>
         _Scoped_node(__hashtable_alloc* __h, _Args&&... __args)
         : _M_h(__h),
           _M_node(__h->_M_allocate_node(std::forward<_Args>(__args)...))
         { }

Tested x86_64-linux, committed to trunk.
commit 65b6d828762c2568c9de850a9a4943d5ef5f87f1
Author: Jonathan Wakely <jwakely@redhat.com>
Date:   Mon Jun 17 10:02:19 2019 +0100

    Simplify node ownership in _Hashtable members
    
    Introduce an RAII type to manage nodes in unordered containers while
    they are being inserted. If the caller always owns a node until it is
    inserted, then the insertion functions don't need to deallocate on
    failure. This allows a FIXME in the node re-insertion API to be removed.
    
    Also change extract(const key_type&) to not call extract(const_iterator)
    anymore.  This avoids looping through the bucket nodes again to find the
    node before the one being extracted.
    
    2019-06-17  François Dumont  <fdumont@gcc.gnu.org>
                Jonathan Wakely  <jwakely@redhat.com>
    
            * include/bits/hashtable.h (struct _Hashtable::_Scoped_node): New type.
            (_Hashtable::_M_insert_unique_node): Add key_type parameter. Don't
            deallocate node if insertion fails.
            (_Hashtable::_M_insert_multi_node): Likewise.
            (_Hashtable::_M_reinsert_node): Pass additional key argument.
            (_Hashtable::_M_reinsert_node_multi): Likewise. Remove FIXME.
            (_Hashtable::_M_extract_node(size_t, __node_base*)): New function.
            (_Hashtable::extract(const_iterator)): Use _M_extract_node.
            (_Hashtable::extract(const _Key&)): Likewise.
            (_Hashtable::_M_merge_unique): Pass additional key argument.
            (_Hashtable::_M_emplace<Args>(true_type, Args&&...)): Likewise. Use
            _Scoped_node.
            (_Hashtable::_M_insert): Likewise.
            * include/bits/hashtable_policy.h (_Map_base::operator[]): Likewise.
            (_Hashtable_alloc): Add comments to functions with misleading names.

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index e2e3f016a35..ab579a7059e 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -256,6 +256,30 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using __reuse_or_alloc_node_gen_t =
 	__detail::_ReuseOrAllocNode<__node_alloc_type>;
 
+      // Simple RAII type for managing a node containing an element
+      struct _Scoped_node
+      {
+	// Take ownership of a node with a constructed element.
+	_Scoped_node(__node_type* __n, __hashtable_alloc* __h)
+	: _M_h(__h), _M_node(__n) { }
+
+	// Allocate a node and construct an element within it.
+	template<typename... _Args>
+	  _Scoped_node(__hashtable_alloc* __h, _Args&&... __args)
+	  : _M_h(__h),
+	    _M_node(__h->_M_allocate_node(std::forward<_Args>(__args)...))
+	  { }
+
+	// Destroy element and deallocate node.
+	~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };
+
+	_Scoped_node(const _Scoped_node&) = delete;
+	_Scoped_node& operator=(const _Scoped_node&) = delete;
+
+	__hashtable_alloc* _M_h;
+	__node_type* _M_node;
+      };
+
       // Metaprogramming for picking apart hash caching.
       template<typename _Cond>
 	using __if_hash_cached = __or_<__not_<__hash_cached>, _Cond>;
@@ -669,17 +693,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
-      // no element with its key already present). Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node __n with key __k and hash code __code, in bucket __bkt
+      // if no rehash (assumes no element with same key already present).
+      // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			    __node_type* __n, size_type __n_elt = 1);
+      _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			    __hash_code __code, __node_type* __n,
+			    size_type __n_elt = 1);
 
-      // Insert node with hash code __code. Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node __n with key __k and hash code __code.
+      // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_multi_node(__node_type* __hint,
+      _M_insert_multi_node(__node_type* __hint, const key_type& __k,
 			   __hash_code __code, __node_type* __n);
 
       template<typename... _Args>
@@ -806,7 +831,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    else
 	      {
 		__ret.position
-		  = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
+		  = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
 		__nh._M_ptr = nullptr;
 		__ret.inserted = true;
 	      }
@@ -818,33 +843,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       iterator
       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
       {
-	iterator __ret;
 	if (__nh.empty())
-	  __ret = end();
-	else
-	  {
-	    __glibcxx_assert(get_allocator() == __nh.get_allocator());
+	  return end();
 
-	    auto __code = this->_M_hash_code(__nh._M_key());
-	    auto __node = std::exchange(__nh._M_ptr, nullptr);
-	    // FIXME: this deallocates the node on exception.
-	    __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
-	  }
+	__glibcxx_assert(get_allocator() == __nh.get_allocator());
+
+	const key_type& __k = __nh._M_key();
+	auto __code = this->_M_hash_code(__k);
+	auto __ret
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __nh._M_ptr);
+	__nh._M_ptr = nullptr;
 	return __ret;
       }
 
-      /// Extract a node.
+    private:
       node_type
-      extract(const_iterator __pos)
+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
       {
-	__node_type* __n = __pos._M_cur;
-	size_t __bkt = _M_bucket_index(__n);
-
-	// Look for previous node to unlink it from the erased one, this
-	// is why we need buckets to contain the before begin to make
-	// this search fast.
-	__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-
+	__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
 	if (__prev_n == _M_buckets[__bkt])
 	  _M_remove_bucket_begin(__bkt, __n->_M_next(),
 	     __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
@@ -861,14 +877,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	return { __n, this->_M_node_allocator() };
       }
 
+    public:
+      // Extract a node.
+      node_type
+      extract(const_iterator __pos)
+      {
+	size_t __bkt = _M_bucket_index(__pos._M_cur);
+	return _M_extract_node(__bkt,
+			       _M_get_previous_node(__bkt, __pos._M_cur));
+      }
+
       /// Extract a node.
       node_type
       extract(const _Key& __k)
       {
 	node_type __nh;
-	auto __pos = find(__k);
-	if (__pos != end())
-	  __nh = extract(const_iterator(__pos));
+	__hash_code __code = this->_M_hash_code(__k);
+	std::size_t __bkt = _M_bucket_index(__k, __code);
+	if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
+	  __nh = _M_extract_node(__bkt, __prev_node);
 	return __nh;
       }
 
@@ -885,13 +912,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
 	    {
 	      auto __pos = __i++;
-	      const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
+	      const key_type& __k = this->_M_extract()(*__pos);
 	      __hash_code __code = this->_M_hash_code(__k);
 	      size_type __bkt = _M_bucket_index(__k, __code);
 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
 		{
 		  auto __nh = __src.extract(__pos);
-		  _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
+		  _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr,
+					__n_elt);
 		  __nh._M_ptr = nullptr;
 		  __n_elt = 1;
 		}
@@ -1634,31 +1662,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> pair<iterator, bool>
       {
 	// First build the node to get access to the hash code
-	__node_type* __node
-	  = this->_M_allocate_node(std::forward<_Args>(__args)...);
-	const key_type& __k = this->_M_extract()(__node->_M_v());
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(__k);
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
-
+	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
-	    // There is already an equivalent node, no insertion
-	    this->_M_deallocate_node(__node);
-	    return std::make_pair(iterator(__p), false);
-	  }
+	  // There is already an equivalent node, no insertion
+	  return std::make_pair(iterator(__p), false);
 
 	// Insert the node
-	return std::make_pair(_M_insert_unique_node(__bkt, __code, __node),
-			      true);
+	auto __pos = _M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return { __pos, true };
       }
 
   template<typename _Key, typename _Value,
@@ -1673,21 +1688,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> iterator
       {
 	// First build the node to get its hash code.
-	__node_type* __node =
-	  this->_M_allocate_node(std::forward<_Args>(__args)...);
+	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(this->_M_extract()(__node->_M_v()));
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
-
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	__hash_code __code = this->_M_hash_code(__k);
+	auto __pos
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return __pos;
       }
 
   template<typename _Key, typename _Value,
@@ -1697,8 +1705,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			  __node_type* __node, size_type __n_elt)
+    _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			  __hash_code __code, __node_type* __node,
+			  size_type __n_elt)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1706,31 +1715,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count,
 					  __n_elt);
 
-      __try
+      if (__do_rehash.first)
 	{
-	  if (__do_rehash.first)
-	    {
-	      _M_rehash(__do_rehash.second, __saved_state);
-	      __bkt
-		= _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
-	    }
-
-	  this->_M_store_code(__node, __code);
-
-	  // Always insert at the beginning of the bucket.
-	  _M_insert_bucket_begin(__bkt, __node);
-	  ++_M_element_count;
-	  return iterator(__node);
-	}
-      __catch(...)
-	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
+	  _M_rehash(__do_rehash.second, __saved_state);
+	  __bkt = _M_bucket_index(__k, __code);
 	}
+
+      this->_M_store_code(__node, __code);
+
+      // Always insert at the beginning of the bucket.
+      _M_insert_bucket_begin(__bkt, __node);
+      ++_M_element_count;
+      return iterator(__node);
     }
 
-  // Insert node, in bucket bkt if no rehash (assumes no element with its key
-  // already present). Take ownership of the node, deallocate it on exception.
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
@@ -1738,60 +1736,50 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
-			 __node_type* __node)
+    _M_insert_multi_node(__node_type* __hint, const key_type& __k,
+			 __hash_code __code, __node_type* __node)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
       std::pair<bool, std::size_t> __do_rehash
 	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
 
-      __try
-	{
-	  if (__do_rehash.first)
-	    _M_rehash(__do_rehash.second, __saved_state);
+      if (__do_rehash.first)
+	_M_rehash(__do_rehash.second, __saved_state);
 
-	  this->_M_store_code(__node, __code);
-	  const key_type& __k = this->_M_extract()(__node->_M_v());
-	  size_type __bkt = _M_bucket_index(__k, __code);
+      this->_M_store_code(__node, __code);
+      size_type __bkt = _M_bucket_index(__k, __code);
 
-	  // Find the node before an equivalent one or use hint if it exists and
-	  // if it is equivalent.
-	  __node_base* __prev
-	    = __builtin_expect(__hint != nullptr, false)
-	      && this->_M_equals(__k, __code, __hint)
-		? __hint
-		: _M_find_before_node(__bkt, __k, __code);
-	  if (__prev)
-	    {
-	      // Insert after the node before the equivalent one.
-	      __node->_M_nxt = __prev->_M_nxt;
-	      __prev->_M_nxt = __node;
-	      if (__builtin_expect(__prev == __hint, false))
-	      	// hint might be the last bucket node, in this case we need to
-	      	// update next bucket.
-	      	if (__node->_M_nxt
-	      	    && !this->_M_equals(__k, __code, __node->_M_next()))
-	      	  {
-	      	    size_type __next_bkt = _M_bucket_index(__node->_M_next());
-	      	    if (__next_bkt != __bkt)
-	      	      _M_buckets[__next_bkt] = __node;
-	      	  }
-	    }
-	  else
-	    // The inserted node has no equivalent in the
-	    // hashtable. We must insert the new node at the
-	    // beginning of the bucket to preserve equivalent
-	    // elements' relative positions.
-	    _M_insert_bucket_begin(__bkt, __node);
-	  ++_M_element_count;
-	  return iterator(__node);
-	}
-      __catch(...)
+      // Find the node before an equivalent one or use hint if it exists and
+      // if it is equivalent.
+      __node_base* __prev
+	= __builtin_expect(__hint != nullptr, false)
+	  && this->_M_equals(__k, __code, __hint)
+	    ? __hint
+	    : _M_find_before_node(__bkt, __k, __code);
+      if (__prev)
 	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
+	  // Insert after the node before the equivalent one.
+	  __node->_M_nxt = __prev->_M_nxt;
+	  __prev->_M_nxt = __node;
+	  if (__builtin_expect(__prev == __hint, false))
+	    // hint might be the last bucket node, in this case we need to
+	    // update next bucket.
+	    if (__node->_M_nxt
+		&& !this->_M_equals(__k, __code, __node->_M_next()))
+	      {
+		size_type __next_bkt = _M_bucket_index(__node->_M_next());
+		if (__next_bkt != __bkt)
+		  _M_buckets[__next_bkt] = __node;
+	      }
 	}
+      else
+	// The inserted node has no equivalent in the hashtable. We must
+	// insert the new node at the beginning of the bucket to preserve
+	// equivalent elements' relative positions.
+	_M_insert_bucket_begin(__bkt, __node);
+      ++_M_element_count;
+      return iterator(__node);
     }
 
   // Insert v if no element with its key is already present.
@@ -1811,12 +1799,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 
-	__node_type* __n = _M_find_node(__bkt, __k, __code);
-	if (__n)
-	  return std::make_pair(iterator(__n), false);
+	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
+	  return { iterator(__node), false };
 
-	__n = __node_gen(std::forward<_Arg>(__v));
-	return { _M_insert_unique_node(__bkt, __code, __n, __n_elt), true };
+	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
+	auto __pos
+	  = _M_insert_unique_node(__k, __bkt, __code, __node._M_node, __n_elt);
+	__node._M_node = nullptr;
+	return { __pos, true };
       }
 
   // Insert v unconditionally.
@@ -1837,9 +1827,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
 
 	// Second allocate new node so that we don't rehash if it throws.
-	__node_type* __node = __node_gen(std::forward<_Arg>(__v));
-
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	auto __pos
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return __pos;
       }
 
   template<typename _Key, typename _Value,
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 878154ae210..f5809c7443a 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -706,17 +706,19 @@ namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
-				      std::tuple<const key_type&>(__k),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
-
-      return __p->_M_v().second;
+      typename __hashtable::_Scoped_node __node {
+	__h,
+	std::piecewise_construct,
+	std::tuple<const key_type&>(__k),
+	std::tuple<>()
+      };
+      auto __pos
+	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+      __node._M_node = nullptr;
+      return __pos->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -731,17 +733,19 @@ namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
-				      std::forward_as_tuple(std::move(__k)),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
-
-      return __p->_M_v().second;
+      typename __hashtable::_Scoped_node __node {
+	__h,
+	std::piecewise_construct,
+	std::forward_as_tuple(std::move(__k)),
+	std::tuple<>()
+      };
+      auto __pos
+	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+      __node._M_node = nullptr;
+      return __pos->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -1972,8 +1976,8 @@ namespace __detail
     }
 
   /**
-   * This type deals with all allocation and keeps an allocator instance through
-   * inheritance to benefit from EBO when possible.
+   * This type deals with all allocation and keeps an allocator instance
+   * through inheritance to benefit from EBO when possible.
    */
   template<typename _NodeAlloc>
     struct _Hashtable_alloc : private _Hashtable_ebo_helper<0, _NodeAlloc>
@@ -2012,17 +2016,21 @@ namespace __detail
       _M_node_allocator() const
       { return __ebo_node_alloc::_M_cget(); }
 
+      // Allocate a node and construct an element within it.
       template<typename... _Args>
 	__node_type*
 	_M_allocate_node(_Args&&... __args);
 
+      // Destroy the element within a node and deallocate the node.
       void
       _M_deallocate_node(__node_type* __n);
 
+      // Deallocate a node.
       void
       _M_deallocate_node_ptr(__node_type* __n);
 
-      // Deallocate the linked list of nodes pointed to by __n
+      // Deallocate the linked list of nodes pointed to by __n.
+      // The elements within the nodes are destroyed.
       void
       _M_deallocate_nodes(__node_type* __n);
François Dumont June 18, 2019, 5:52 a.m. | #7
A small regression I noticed while merging: we shouldn't keep using a
moved-from key_type instance.

Ok to commit? Feel free to commit it yourself if you prefer; otherwise
I'll do so at the end of the day, European time.

François


On 6/17/19 12:28 PM, Jonathan Wakely wrote:
> On 07/06/19 18:39 +0200, François Dumont wrote:
>> On 6/5/19 6:22 PM, Jonathan Wakely wrote:
>>> On 04/06/19 19:19 +0200, François Dumont wrote:
>>>> Hi
>>>>
>>>>     Here is a patch to enhance the _Hashtable extract node API and 
>>>> fix a FIXME request.
>>>>
>>>>     The enhancement to the extract node Api is that extract(const 
>>>> key_type&) do not call extract(const_iterator) anymore. Doing so we 
>>>> had to loop again through bucket nodes to find the previous node to 
>>>> the one to extract. Even if a bucket shall not contain many nodes 
>>>> (in unique key mode) it's easy to avoid it.
>>>
>>> Nice.
>>>
>>>>     To fix the FIXME I introduced a node smart pointer type 
>>>> managing the node lifetime. The node is extracted from this smart 
>>>> pointer only when there can't be any exception raised. In the 
>>>> context of the node extract api the node handle is considered as a 
>>>> smart pointer. So the node handle will remain owner of the node in 
>>>> case of exception when reinserting it, I hope it is the expected 
>>>> behavior.
>>>
>>> Yes, that's right.
>>>
>>> I was going to suggest just using the node handle type instead of
>>> inventing a new smart pointer, but the handle type uses std::optional
>>> so isn't available for C++11/14.
>>
>> I considered it too, or even use shared_ptr but thought it was overkill.
>>
>>>
>>>
>>>>     * include/bits/hashtable_policy.h
>>>>     (struct _NodeSmartPointer<_NodeAlloc>): New.
>>>>     (_Map_base<>::operator[](const key_type&)): Use latter, adapt.
>>>>     (_Map_base<>::operator[](key_type&&)): Likewise.
>>>>     * include/bits/hashtable.h
>>>>     (_Hashtable<>::__node_sp_t): New.
>>>>     (_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
>>>>     __node_type*, size_type)): Replace by...
>>>> (_Hashtable<>::_M_insert_unique_node<_NodeAccessor>(const key_type&,
>>>>     size_type, __hash_code, const _NodeAccessor&, size_type)): 
>>>> ...that.
>>>>     (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code,
>>>>     __node_type*)): Replace by...
>>>> (_Hashtable<>::_M_insert_multi_node<_NodeAccessor>(__node_type*,
>>>>     __hash_code, const _NodeAccessor&)): ...that.
>>>>     (_Hashtable<>::_M_reinsert_node): Adapt.
>>>>     (_Hashtable<>::_M_reinsert_node_multi): Adapt.
>>>>     (_Hashtable<>::_M_extract_node(size_t, __node_base*)): New.
>>>>     (_Hashtable<>::extract(const_iterator)): Use latter.
>>>>     (_Hashtable<>::extract(const _Key&)): Likewise.
>>>>     (_Hashtable<>::_M_merge_unique): Adapt.
>>>>     (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Adapt.
>>>> (_Hashtable<>::_M_emplace<_Args>(const_iterator, false_type,
>>>>     _Args&&...)): Adapt.
>>>>
>>>> Tested under Linux x86_64.
>>>>
>>>> Ok to commit ?
>>>>
>>>> François
>>>>
>>>
>>>> diff --git a/libstdc++-v3/include/bits/hashtable.h 
>>>> b/libstdc++-v3/include/bits/hashtable.h
>>>> index e2e3f016a35..307865b96bf 100644
>>>> --- a/libstdc++-v3/include/bits/hashtable.h
>>>> +++ b/libstdc++-v3/include/bits/hashtable.h
>>>> @@ -197,6 +197,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>       using __hash_cached = typename __traits_type::__hash_cached;
>>>>       using __node_type = __detail::_Hash_node<_Value, 
>>>> __hash_cached::value>;
>>>>       using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
>>>> +      using __node_sp_t = 
>>>> __detail::_NodeSmartPointer<__node_alloc_type>;
>>>>
>>>>       using __hashtable_alloc = 
>>>> __detail::_Hashtable_alloc<__node_alloc_type>;
>>>>
>>>> @@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>       __node_base*
>>>>       _M_get_previous_node(size_type __bkt, __node_base* __n);
>>>>
>>>> -      // Insert node with hash code __code, in bucket bkt if no 
>>>> rehash (assumes
>>>> -      // no element with its key already present). Take ownership 
>>>> of the node,
>>>> -      // deallocate it on exception.
>>>> +      // Insert node with key __k and hash code __code, in bucket 
>>>> __bkt if no
>>>> +      // rehash (assumes no element with its key already present).
>>>> +      template<typename _NodeAccessor>
>>>>     iterator
>>>> -      _M_insert_unique_node(size_type __bkt, __hash_code __code,
>>>> -                __node_type* __n, size_type __n_elt = 1);
>>>> +    _M_insert_unique_node(const key_type& __k, size_type __bkt,
>>>> +                  __hash_code __code, const _NodeAccessor&,
>>>> +                  size_type __n_elt = 1);
>>>>
>>>> -      // Insert node with hash code __code. Take ownership of the 
>>>> node,
>>>> -      // deallocate it on exception.
>>>> +      // Insert node with hash code __code.
>>>> +      template<typename _NodeAccessor>
>>>>     iterator
>>>> -      _M_insert_multi_node(__node_type* __hint,
>>>> -               __hash_code __code, __node_type* __n);
>>>> +    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
>>>> +                 const _NodeAccessor& __node_accessor);
>>>
>>> It looks like most times you call these functions you pass an
>>> identical lambda expression, but each of those lambda expressions will
>>> create a unique type. That means you create different instantiations
>>> of the function templates even though they do exactly the same thing.
>>>
>>> That's just generating multiple copies of identical code. Passing in a
>>> function object to provide the node pointer doesn't really seem
>>> necessary anyway, so if it results in larger executables it's really
>>> not desirable.
>>
>>
>> Maybe one the reasons we have a difference when running performance 
>> tests in -03. I don't know why I abused so much of lambdas when I see 
>> your proposal.
>>
>>
>>>
>>> The attached patch still just passes in a node pointer (which the
>>> function takes ownership of, unless it throws). Because the semantics
>>> of _M_insert_multi_node change, we need the symbol name to change as
>>> well (so old code linking to the old symbol doesn't find the new
>>> definition with different semantics). Passing in the key makes it a
>>> different symbol (in almost all cases the caller has the key available
>>> already anyway).
>> Very nice.
>>>
>>> An alternative design would be for _M_insert_(unique|multi)_node to
>>> take a reference to the RAII type and set its _M_node member to null
>>> after taking ownership of that pointer. That would be more explicit
>>> about the ownership transfer, however it would require changes to the
>>> _M_reinsert_node and _M_reinsert_node_multi functions which don't use
>>> the RAII type (because the __node_type* is already owned by a node
>>> handle).
>>>
>>>>
>>>>       template<typename... _Args>
>>>>     std::pair<iterator, bool>
>>>> @@ -805,9 +807,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>           }
>>>>         else
>>>>           {
>>>> -        __ret.position
>>>> -          = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
>>>> -        __nh._M_ptr = nullptr;
>>>> +        __ret.position = _M_insert_unique_node(__k, __bkt, __code,
>>>> +                [&__nh]()
>>>> +                { return std::exchange(__nh._M_ptr, nullptr); });
>>>
>>> The existing code here can just be changed to pass in __k, but still
>>> set __nh._M_ptr to null after the call returns.
>>>
>>> i.e.
>>>
>>>         __ret.position
>>>           = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
>>>         __nh._M_ptr = nullptr;
>>>
>>>
>>>>         __ret.inserted = true;
>>>>           }
>>>>       }
>>>> @@ -818,33 +820,23 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>       iterator
>>>>       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
>>>>       {
>>>> -    iterator __ret;
>>>>     if (__nh.empty())
>>>> -      __ret = end();
>>>> -    else
>>>> -      {
>>>> +      return end();
>>>> +
>>>>     __glibcxx_assert(get_allocator() == __nh.get_allocator());
>>>>
>>>>     auto __code = this->_M_hash_code(__nh._M_key());
>>>> -        auto __node = std::exchange(__nh._M_ptr, nullptr);
>>>> -        // FIXME: this deallocates the node on exception.
>>>> -        __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
>>>> -      }
>>>> -    return __ret;
>>>> +    return _M_insert_multi_node(__hint._M_cur, __code,
>>>> +                  [&__nh]()
>>>> +                  { return std::exchange(__nh._M_ptr, nullptr); });
>>>
>>> Now that the call won't deallocate anything, we can fix the FIXME by
>>> just delaying resetting __nh._M_ptr until the call has returned:
>>>
>>>        auto __ret = _M_insert_multi_node(__hint._M_cur, __code, 
>>> __node);
>>>        __nh._M_ptr = nullptr;
>>>        return __ret;
>>>
>>>
>>>>       }
>>>>
>>>> +    private:
>>>>       /// Extract a node.
>>>
>>> We don't want a Doxygen "///" comment on a private function, just "//"
>>> is OK. In this case I don't think we need any comment, the
>>> _M_extract_node name is clear enough. Saying "extract a node" adds
>>> nothing.
>>>
>>>
>>>>       node_type
>>>> -      extract(const_iterator __pos)
>>>> +      _M_extract_node(size_t __bkt, __node_base* __prev_n)
>>>>       {
>>>> -    __node_type* __n = __pos._M_cur;
>>>> -    size_t __bkt = _M_bucket_index(__n);
>>>> -
>>>> -    // Look for previous node to unlink it from the erased one, this
>>>> -    // is why we need buckets to contain the before begin to make
>>>> -    // this search fast.
>>>> -    __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
>>>> -
>>>> +    __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
>>>>     if (__prev_n == _M_buckets[__bkt])
>>>>       _M_remove_bucket_begin(__bkt, __n->_M_next(),
>>>>          __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
>>>> @@ -861,14 +853,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>     return { __n, this->_M_node_allocator() };
>>>>       }
>>>>
>>>> +    public:
>>>> +      /// Extract a node.
>>>> +      node_type
>>>> +      extract(const_iterator __pos)
>>>> +      {
>>>> +    size_t __bkt = _M_bucket_index(__pos._M_cur);
>>>> +    return _M_extract_node(__bkt, _M_get_previous_node(__bkt, 
>>>> __pos._M_cur));
>>>> +      }
>>>> +
>>>>       /// Extract a node.
>>>>       node_type
>>>>       extract(const _Key& __k)
>>>>       {
>>>>     node_type __nh;
>>>> -    auto __pos = find(__k);
>>>> -    if (__pos != end())
>>>> -      __nh = extract(const_iterator(__pos));
>>>> +    __hash_code __code = this->_M_hash_code(__k);
>>>> +    std::size_t __bkt = _M_bucket_index(__k, __code);
>>>> +    if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, 
>>>> __code))
>>>> +      __nh = _M_extract_node(__bkt, __prev_node);
>>>>     return __nh;
>>>>       }
>>>
>>> Looks good.
>>>
>>>> @@ -885,14 +887,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>       for (auto __i = __src.begin(), __end = __src.end(); __i != 
>>>> __end;)
>>>>         {
>>>>           auto __pos = __i++;
>>>> -          const key_type& __k = 
>>>> this->_M_extract()(__pos._M_cur->_M_v());
>>>> +          const key_type& __k = this->_M_extract()(*__pos);
>>>>           __hash_code __code = this->_M_hash_code(__k);
>>>>           size_type __bkt = _M_bucket_index(__k, __code);
>>>>           if (_M_find_node(__bkt, __k, __code) == nullptr)
>>>>         {
>>>>           auto __nh = __src.extract(__pos);
>>>> -          _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
>>>> -          __nh._M_ptr = nullptr;
>>>> +          _M_insert_unique_node(__k, __bkt, __code,
>>>> +                [&__nh]()
>>>> +                { return std::exchange(__nh._M_ptr, nullptr); },
>>>> +                __n_elt);
>>>
>>> Again, the old code seems fine. Just pass in __k.
>>>
>>>>           __n_elt = 1;
>>>>         }
>>>>           else if (__n_elt != 1)
>>>> @@ -1634,31 +1638,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>>>       -> pair<iterator, bool>
>>>>       {
>>>>     // First build the node to get access to the hash code
>>>> -    __node_type* __node
>>>> -      = this->_M_allocate_node(std::forward<_Args>(__args)...);
>>>> -    const key_type& __k = this->_M_extract()(__node->_M_v());
>>>> -    __hash_code __code;
>>>> -    __try
>>>> -      {
>>>> -        __code = this->_M_hash_code(__k);
>>>> -      }
>>>> -    __catch(...)
>>>> -      {
>>>> -        this->_M_deallocate_node(__node);
>>>
>>> Aside: It's confusing that _M_allocate_node actually allocates a node
>>> and constructs an element, and _M_deallocate_node destroys the element
>>> and deallocates the node (and we have _M_deallocate_node_ptr which
>>> just does the deallocation part). We should add comments explaining
>>> that.
>>>
>>>
>>>> diff --git a/libstdc++-v3/include/bits/hashtable_policy.h 
>>>> b/libstdc++-v3/include/bits/hashtable_policy.h
>>>> index 878154ae210..ddcff0ec9d0 100644
>>>> --- a/libstdc++-v3/include/bits/hashtable_policy.h
>>>> +++ b/libstdc++-v3/include/bits/hashtable_policy.h
>>>> @@ -170,6 +170,40 @@ namespace __detail
>>>>       __hashtable_alloc& _M_h;
>>>>     };
>>>>
>>>> +  // Node smart pointer to benefit from RAII.
>>>> +  template<typename _NodeAlloc>
>>>> +    struct _NodeSmartPointer
>>>> +    {
>>>> +    private:
>>>> +      using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
>>>> +      using __node_type = typename __hashtable_alloc::__node_type;
>>>> +
>>>> +    public:
>>>> +      _NodeSmartPointer(__hashtable_alloc& __h, __node_type* 
>>>> __node) noexcept
>>>> +      : _M_h(__h), _M_node(__node) { }
>>>> +
>>>> +      ~_NodeSmartPointer()
>>>> +      {
>>>> +    if (_M_node)
>>>> +      _M_h._M_deallocate_node(_M_node);
>>>> +      }
>>>> +
>>>> +      __node_type*
>>>> +      _M_get() const { return _M_node; }
>>>> +
>>>> +      __node_type*
>>>> +      _M_release() noexcept
>>>> +      {
>>>> +    __node_type* __tmp = _M_node;
>>>> +    _M_node = nullptr;
>>>> +    return __tmp;
>>>> +      }
>>>> +
>>>> +    private:
>>>> +      __hashtable_alloc& _M_h;
>>>> +      __node_type* _M_node;
>>>> +    };
>>>
>>> This can be defined as a member of _Hashtable, and can be much
>>> simpler:
>>>
>>>      // Simple RAII type for managing a node containing an element
>>>      struct _Scoped_node
>>>      {
>>>     _Scoped_node(__hashtable_alloc* __h, __node_type* __n)
>>>     : _M_h(__h), _M_node(__n) { }
>>>
>>>     ~_Scoped_node() { if (_M_node) 
>>> _M_h->_M_deallocate_node(_M_node); };
>>>
>>>     _Scoped_node(const _Scoped_node&) = delete;
>>>     _Scoped_node& operator=(const _Scoped_node&) = delete;
>>>
>>>     __hashtable_alloc* _M_h;
>>>     __node_type* _M_node;
>>>      };
>>>
>>> I've attached a patch implementing these suggested changes. I think
>>> it's simpler this way, what do you think?
>>
>> I think it is much better than what I proposed.
>>
>>
>>>
>>> (This patch is generated with 'git diff -b' so whitespace changes
>>> aren't shown, so this shouldn't be applied directly. I can send
>>> another version with the whitespace changes that is suitable to
>>> commit).
>>>
>> Mine was generated with 'git diff -w' for the same reason.
>>
>> You can send the full patch to me if you prefer but feel free to 
>> commit it yourself.
>
> Here's what I've committed. Compared to the previous patch this adds a
> constructor to _Scoped_node that creates the new node, so the caller
> doesn't have to create it:
>
>       // Allocate a node and construct an element within it.
>       template<typename... _Args>
>         _Scoped_node(__hashtable_alloc* __h, _Args&&... __args)
>         : _M_h(__h),
> _M_node(__h->_M_allocate_node(std::forward<_Args>(__args)...))
>         { }
>
> Tested x86_64-linux, committed to trunk.
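The RAII idea behind the committed _Scoped_node can be illustrated with a standalone sketch. `Table`, `ScopedNode` and `insert` below are hypothetical simplifications of the libstdc++ internals: the guard owns the node until insertion succeeds, at which point ownership is handed over by nulling `_M_node`; on an exception the guard's destructor frees the node.

```cpp
#include <cassert>
#include <stdexcept>

struct Node { int value; };

// Hypothetical stand-in for _Hashtable: just enough to show ownership flow.
struct Table
{
  Node* stored = nullptr;
  bool fail_mode = false;   // simulate an exception during insertion

  Node* allocate_node(int v) { return new Node{v}; }
  void deallocate_node(Node* n) { delete n; }

  void insert_node(Node* n)
  {
    if (fail_mode)
      throw std::runtime_error("rehash failed");  // simulated failure
    stored = n;
  }
};

// Simplified analogue of the committed _Scoped_node: owns the node until
// ownership is explicitly given away by nulling _M_node.
struct ScopedNode
{
  ScopedNode(Table* h, int v) : _M_h(h), _M_node(h->allocate_node(v)) { }
  ~ScopedNode() { if (_M_node) _M_h->deallocate_node(_M_node); }
  ScopedNode(const ScopedNode&) = delete;
  ScopedNode& operator=(const ScopedNode&) = delete;

  Table* _M_h;
  Node* _M_node;
};

bool insert(Table& t, int v)
{
  ScopedNode node(&t, v);        // guard owns the freshly built node
  try
    {
      t.insert_node(node._M_node);
      node._M_node = nullptr;    // success: the table owns it now
      return true;
    }
  catch (...)
    {
      return false;              // guard's destructor frees the node
    }
}
```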
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index f5809c7443a..7e89e1b44c4 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -743,7 +743,8 @@ namespace __detail
 	std::tuple<>()
       };
       auto __pos
-	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	= __h->_M_insert_unique_node(__h->_M_extract()(__node._M_node->_M_v()),
+				     __bkt, __code, __node._M_node);
       __node._M_node = nullptr;
       return __pos->second;
     }
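The hunk above recovers the key from the element already stored in the node, via the table's key-extractor functor, instead of reading the moved-from `__k`. A rough standalone sketch of that extractor pattern, where `Select1st` is a hypothetical stand-in for the internal extractor that map-like containers use:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Hypothetical stand-in for the key extractor a map-like hashtable uses:
// it returns the key part of a stored pair, mirroring what
// _M_extract()(__node._M_node->_M_v()) does in the patch above.
struct Select1st
{
  template<typename Pair>
  const typename Pair::first_type&
  operator()(const Pair& p) const
  { return p.first; }
};
```

Even after the original key variable has been moved from, the key inside the stored element is intact, so extracting it from there is always safe.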
Jonathan Wakely June 18, 2019, 10:54 a.m. | #8
On 18/06/19 07:52 +0200, François Dumont wrote:
>A small regression noticed while merging.
>
>We shouldn't keep on using a moved-from key_type instance.
>
>Ok to commit ? Feel free to do it if you prefer, I'll do so at end of 
>Europe day otherwise.

>diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
>index f5809c7443a..7e89e1b44c4 100644
>--- a/libstdc++-v3/include/bits/hashtable_policy.h
>+++ b/libstdc++-v3/include/bits/hashtable_policy.h
>@@ -743,7 +743,8 @@ namespace __detail
> 	std::tuple<>()
>       };
>       auto __pos
>-	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
>+	= __h->_M_insert_unique_node(__h->_M_extract()(__node._M_node->_M_v()),
>+				     __bkt, __code, __node._M_node);
>       __node._M_node = nullptr;
>       return __pos->second;
>     }

I can't create an example where this causes a problem, because the key
passed to _M_insert_unique_node is never used. So it doesn't matter
that it's been moved from.
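For most types a moved-from object is merely in a valid-but-unspecified state, so reading it rarely fails in an observable way, which is part of why no failing example exists here. std::unique_ptr, whose moved-from state is specified to be null, makes the effect of a move directly visible; a minimal illustration:

```cpp
#include <memory>
#include <utility>

// A moved-from unique_ptr is guaranteed to be null; most types (including
// a std::string used as key_type) only promise a valid-but-unspecified
// state, so code must not rely on the old value after a move.
inline bool moved_from_is_null()
{
  std::unique_ptr<int> p(new int(42));
  std::unique_ptr<int> q = std::move(p);
  return p == nullptr && *q == 42;
}
```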

So I have to wonder why we just added the key parameter to that
function, if it's never used.

As far as I can tell, it would only be used for a non-default range
hash function, and I don't care about that. Frankly I find the
policy-based _Hashtable completely unmaintainable and I'd gladly get
rid of all of it that isn't needed for the std::unordered_xxx
containers. The non-standard extensions are not used by anybody,
apparently not tested properly (or this regression should have been
noticed) and make the code too complicated.

We're adding new parameters that have to be passed around even though
they're never used by 99.999% of users. No wonder the code is only
fast at -O3.

</rant>
François Dumont June 18, 2019, 8:42 p.m. | #9
On 6/18/19 12:54 PM, Jonathan Wakely wrote:
> On 18/06/19 07:52 +0200, François Dumont wrote:
>> A small regression noticed while merging.
>>
>> We shouldn't keep on using a moved-from key_type instance.
>>
>> Ok to commit ? Feel free to do it if you prefer, I'll do so at end of 
>> Europe day otherwise.
>
>
>> diff --git a/libstdc++-v3/include/bits/hashtable_policy.h 
>> b/libstdc++-v3/include/bits/hashtable_policy.h
>> index f5809c7443a..7e89e1b44c4 100644
>> --- a/libstdc++-v3/include/bits/hashtable_policy.h
>> +++ b/libstdc++-v3/include/bits/hashtable_policy.h
>> @@ -743,7 +743,8 @@ namespace __detail
>>     std::tuple<>()
>>       };
>>       auto __pos
>> -    = __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
>> +    = 
>> __h->_M_insert_unique_node(__h->_M_extract()(__node._M_node->_M_v()),
>> +                     __bkt, __code, __node._M_node);
>>       __node._M_node = nullptr;
>>       return __pos->second;
>>     }
>
> I can't create an example where this causes a problem, because the key
> passed to _M_insert_unique_node is never used. So it doesn't matter
> that it's been moved from.
>
> So I have to wonder why we just added the key parameter to that
> function, if it's never used.

I think you've been influenced by my patch. I was using a "_NodeAccessor" 
which didn't give access to the node without taking ownership, so I 
needed to pass the key explicitly to compute the new bucket index in 
case of a rehash.

But with your approach this change to _M_insert_unique_node was simply 
unnecessary, so here is a patch to clean up this part.

Ok to commit?


>
> As far as I can tell, it would only be used for a non-default range
> hash function, and I don't care about that. Frankly I find the
> policy-based _Hashtable completely unmaintainable and I'd gladly get
> rid of all of it that isn't needed for the std::unordered_xxx
> containers. The non-standard extensions are not used by anybody,
> apparently not tested properly (or this regression should have been
> noticed) and make the code too complicated.

I never considered removing this, but it would indeed make maintenance 
easier. I'll add it to my TODO.
>
> We're adding new parameters that have to be passed around even though
> they're never used by 99.999% of users. No wonder the code is only
> fast at -O3.
>
> </rant>
diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index ab579a7059e..41edafaa2e3 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -693,11 +693,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      // Insert node __n with key __k and hash code __code, in bucket __bkt
-      // if no rehash (assumes no element with same key already present).
+      // Insert node __n with hash code __code, in bucket __bkt if no
+      // rehash (assumes no element with same key already present).
       // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_unique_node(const key_type& __k, size_type __bkt,
+      _M_insert_unique_node(size_type __bkt,
 			    __hash_code __code, __node_type* __n,
 			    size_type __n_elt = 1);
 
@@ -831,7 +831,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    else
 	      {
 		__ret.position
-		  = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
+		  = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
 		__nh._M_ptr = nullptr;
 		__ret.inserted = true;
 	      }
@@ -918,8 +918,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
 		{
 		  auto __nh = __src.extract(__pos);
-		  _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr,
-					__n_elt);
+		  _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
 		  __nh._M_ptr = nullptr;
 		  __n_elt = 1;
 		}
@@ -1671,7 +1670,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  return std::make_pair(iterator(__p), false);
 
 	// Insert the node
-	auto __pos = _M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	auto __pos = _M_insert_unique_node(__bkt, __code, __node._M_node);
 	__node._M_node = nullptr;
 	return { __pos, true };
       }
@@ -1705,9 +1704,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_unique_node(const key_type& __k, size_type __bkt,
-			  __hash_code __code, __node_type* __node,
-			  size_type __n_elt)
+    _M_insert_unique_node(size_type __bkt, __hash_code __code,
+			  __node_type* __node, size_type __n_elt)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1718,7 +1716,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       if (__do_rehash.first)
 	{
 	  _M_rehash(__do_rehash.second, __saved_state);
-	  __bkt = _M_bucket_index(__k, __code);
+	  __bkt = _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
 	}
 
       this->_M_store_code(__node, __code);
@@ -1804,7 +1802,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 	auto __pos
-	  = _M_insert_unique_node(__k, __bkt, __code, __node._M_node, __n_elt);
+	  = _M_insert_unique_node(__bkt, __code, __node._M_node, __n_elt);
 	__node._M_node = nullptr;
 	return { __pos, true };
       }
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index f5809c7443a..6646b9f84a8 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -716,7 +716,7 @@ namespace __detail
 	std::tuple<>()
       };
       auto __pos
-	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	= __h->_M_insert_unique_node(__bkt, __code, __node._M_node);
       __node._M_node = nullptr;
       return __pos->second;
     }
@@ -743,7 +743,7 @@ namespace __detail
 	std::tuple<>()
       };
       auto __pos
-	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	= __h->_M_insert_unique_node(__bkt, __code, __node._M_node);
       __node._M_node = nullptr;
       return __pos->second;
     }
Jonathan Wakely June 18, 2019, 10:47 p.m. | #10
On 18/06/19 22:42 +0200, François Dumont wrote:
>On 6/18/19 12:54 PM, Jonathan Wakely wrote:
>>On 18/06/19 07:52 +0200, François Dumont wrote:
>>>A small regression noticed while merging.
>>>
>>>We shouldn't keep on using a moved-from key_type instance.
>>>
>>>Ok to commit ? Feel free to do it if you prefer, I'll do so at end 
>>>of Europe day otherwise.
>>
>>
>>>diff --git a/libstdc++-v3/include/bits/hashtable_policy.h 
>>>b/libstdc++-v3/include/bits/hashtable_policy.h
>>>index f5809c7443a..7e89e1b44c4 100644
>>>--- a/libstdc++-v3/include/bits/hashtable_policy.h
>>>+++ b/libstdc++-v3/include/bits/hashtable_policy.h
>>>@@ -743,7 +743,8 @@ namespace __detail
>>>    std::tuple<>()
>>>      };
>>>      auto __pos
>>>-    = __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
>>>+    = __h->_M_insert_unique_node(__h->_M_extract()(__node._M_node->_M_v()),
>>>+                     __bkt, __code, __node._M_node);
>>>      __node._M_node = nullptr;
>>>      return __pos->second;
>>>    }
>>
>>I can't create an example where this causes a problem, because the key
>>passed to _M_insert_unique_node is never used. So it doesn't matter
>>that it's been moved from.
>>
>>So I have to wonder why we just added the key parameter to that
>>function, if it's never used.
>
>I think you've been influence by my patch. I was using a 
>"_NodeAccessor" which wasn't giving access to the node without taking 
>owership so I needed to pass the key properly to compute new bucket 
>index in case of rehash.
>
>But with your approach this change to the _M_insert_unique_node was 
>simply unecessary so here is a patch to cleanup this part.

Ha! I see, thanks. So I should have removed that key_type parameter
again after removing the NodeAccessor stuff.


>Ok to commit ?


No, because that would restore the original signature of the
_M_insert_unique_node function, but it has changed contract. Old
callers who expect that function to delete the node would now leak
memory if an exception is thrown.

If we change the contract of the function we need to change its
mangled name, so that callers expecting the old contract will not use
the new function.

I'll think about the best way to do that ...
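The approach François settles on below adds a true_type/false_type tag parameter, which changes the function's signature and therefore its mangled name, so binaries built against the old declaration keep resolving to the old symbol with the old contract. A hypothetical sketch of the idea (the names are illustrative, not the libstdc++ ones):

```cpp
#include <type_traits>

// Hypothetical old entry point: kept so existing binaries, which expect
// the old "deallocate on exception" contract, still link to this symbol.
inline int insert_unique_node(int bkt)
{ return bkt; }

// Hypothetical new entry point: the std::true_type tag is part of the
// signature, so this overload gets a distinct mangled name and old
// callers can never land on the new "caller keeps ownership" contract
// by accident.
inline int insert_node(std::true_type, int bkt)
{ return bkt; }
```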
François Dumont June 20, 2019, 8:40 p.m. | #11
On 6/19/19 12:47 AM, Jonathan Wakely wrote:
> On 18/06/19 22:42 +0200, François Dumont wrote:
>> On 6/18/19 12:54 PM, Jonathan Wakely wrote:
>>> On 18/06/19 07:52 +0200, François Dumont wrote:
>>>> A small regression noticed while merging.
>>>>
>>>> We shouldn't keep on using a moved-from key_type instance.
>>>>
>>>> Ok to commit ? Feel free to do it if you prefer, I'll do so at end 
>>>> of Europe day otherwise.
>>>
>>>
>>>> diff --git a/libstdc++-v3/include/bits/hashtable_policy.h 
>>>> b/libstdc++-v3/include/bits/hashtable_policy.h
>>>> index f5809c7443a..7e89e1b44c4 100644
>>>> --- a/libstdc++-v3/include/bits/hashtable_policy.h
>>>> +++ b/libstdc++-v3/include/bits/hashtable_policy.h
>>>> @@ -743,7 +743,8 @@ namespace __detail
>>>>     std::tuple<>()
>>>>       };
>>>>       auto __pos
>>>> -    = __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
>>>> +    = 
>>>> __h->_M_insert_unique_node(__h->_M_extract()(__node._M_node->_M_v()),
>>>> +                     __bkt, __code, __node._M_node);
>>>>       __node._M_node = nullptr;
>>>>       return __pos->second;
>>>>     }
>>>
>>> I can't create an example where this causes a problem, because the key
>>> passed to _M_insert_unique_node is never used. So it doesn't matter
>>> that it's been moved from.
>>>
>>> So I have to wonder why we just added the key parameter to that
>>> function, if it's never used.
>>
>> I think you've been influence by my patch. I was using a 
>> "_NodeAccessor" which wasn't giving access to the node without taking 
>> owership so I needed to pass the key properly to compute new bucket 
>> index in case of rehash.
>>
>> But with your approach this change to the _M_insert_unique_node was 
>> simply unecessary so here is a patch to cleanup this part.
>
> Ha! I see, thanks. So I should have removed that key_type parameter
> again after removing the NodeAccessor stuff.
>
>
>> Ok to commit ?
>
> No, because that would restore the original signature of the
> _M_insert_unique_node function, but it has changed contract. Old
> callers who expect that function to delete the node would now leak
> memory if an exception is thrown.
>

Oh, yes, the ABI; I tend to forget it, even though the recent PR 90920 
reminded me about that, sorry.
> If we change the contract of the function we need to change its
> mangled name, so that callers expecting the old contract will not use
> the new function.
>
> I'll think about the best way to do that ...

Something like what's attached?

I still use _GLIBCXX_INLINE_VERSION to tag the functions that are kept 
just for ABI compatibility.

Ok to commit after having run the tests?

François
diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index ab579a7059e..2ea75a24f1c 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -693,19 +693,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      // Insert node __n with key __k and hash code __code, in bucket __bkt
-      // if no rehash (assumes no element with same key already present).
+      // Insert node __n with hash code __code, in bucket __bkt if no
+      // rehash (assumes no element with same key already present).
       // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_unique_node(const key_type& __k, size_type __bkt,
-			    __hash_code __code, __node_type* __n,
-			    size_type __n_elt = 1);
+      _M_insert_node(true_type, size_type __bkt, __hash_code,
+		     __node_type* __n, size_type __n_elt = 1);
+
+#if !_GLIBCXX_INLINE_VERSION
+      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
+      // no element with its key already present). Take ownership of the node,
+      // deallocate it on exception.
+      iterator
+      _M_insert_unique_node(size_type __bkt, __hash_code __code,
+			    __node_type* __n, size_type __n_elt = 1);
+#endif
 
       // Insert node __n with key __k and hash code __code.
       // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_multi_node(__node_type* __hint, const key_type& __k,
+      _M_insert_node(false_type, __node_type* __hint,
+		     __hash_code __code, __node_type* __n);
+
+#if !_GLIBCXX_INLINE_VERSION
+      // Insert node with hash code __code. Take ownership of the node,
+      // deallocate it on exception.
+      iterator
+      _M_insert_multi_node(__node_type* __hint,
 			   __hash_code __code, __node_type* __n);
+#endif
 
       template<typename... _Args>
 	std::pair<iterator, bool>
@@ -831,7 +847,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    else
 	      {
 		__ret.position
-		  = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
+		  = _M_insert_node(true_type{}, __bkt, __code, __nh._M_ptr);
 		__nh._M_ptr = nullptr;
 		__ret.inserted = true;
 	      }
@@ -851,7 +867,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	const key_type& __k = __nh._M_key();
 	auto __code = this->_M_hash_code(__k);
 	auto __ret
-	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __nh._M_ptr);
+	  = _M_insert_node(false_type{}, __hint._M_cur, __code, __nh._M_ptr);
 	__nh._M_ptr = nullptr;
 	return __ret;
       }
@@ -918,8 +934,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
 		{
 		  auto __nh = __src.extract(__pos);
-		  _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr,
-					__n_elt);
+		  _M_insert_node(true_type{}, __bkt, __code, __nh._M_ptr, __n_elt);
 		  __nh._M_ptr = nullptr;
 		  __n_elt = 1;
 		}
@@ -1671,7 +1686,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  return std::make_pair(iterator(__p), false);
 
 	// Insert the node
-	auto __pos = _M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	auto __pos = _M_insert_node(true_type{}, __bkt, __code, __node._M_node);
 	__node._M_node = nullptr;
 	return { __pos, true };
       }
@@ -1693,7 +1708,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
 	__hash_code __code = this->_M_hash_code(__k);
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	  = _M_insert_node(false_type{}, __hint._M_cur, __code, __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
@@ -1705,9 +1720,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_unique_node(const key_type& __k, size_type __bkt,
-			  __hash_code __code, __node_type* __node,
-			  size_type __n_elt)
+    _M_insert_node(true_type, size_type __bkt, __hash_code __code,
+		   __node_type* __node, size_type __n_elt)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1718,7 +1732,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       if (__do_rehash.first)
 	{
 	  _M_rehash(__do_rehash.second, __saved_state);
-	  __bkt = _M_bucket_index(__k, __code);
+	  __bkt = _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
 	}
 
       this->_M_store_code(__node, __code);
@@ -1729,6 +1743,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       return iterator(__node);
     }
 
+#if !_GLIBCXX_INLINE_VERSION
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
@@ -1736,8 +1751,31 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_multi_node(__node_type* __hint, const key_type& __k,
-			 __hash_code __code, __node_type* __node)
+    _M_insert_unique_node(size_type __bkt, __hash_code __code,
+			  __node_type* __node, size_type __n_elt)
+    -> iterator
+    {
+      __try
+	{
+	  return _M_insert_node(true_type{}, __bkt, __code, __node, __n_elt);
+	}
+      __catch(...)
+	{
+	  this->_M_deallocate_node(__node);
+	  __throw_exception_again;
+	}
+    }
+#endif
+
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_insert_node(false_type, __node_type* __hint,
+		   __hash_code __code, __node_type* __node)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1748,6 +1786,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_M_rehash(__do_rehash.second, __saved_state);
 
       this->_M_store_code(__node, __code);
+      const key_type& __k = this->_M_extract()(__node->_M_v());
       size_type __bkt = _M_bucket_index(__k, __code);
 
       // Find the node before an equivalent one or use hint if it exists and
@@ -1782,6 +1821,30 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       return iterator(__node);
     }
 
+#if !_GLIBCXX_INLINE_VERSION
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_insert_multi_node(__node_type* __hint,
+			 __hash_code __code, __node_type* __node)
+    -> iterator
+    {
+      __try
+	{
+	  return _M_insert_node(false_type{}, __hint, __code, __node);
+	}
+      __catch(...)
+	{
+	  this->_M_deallocate_node(__node);
+	  __throw_exception_again;
+	}
+    }
+#endif
+
   // Insert v if no element with its key is already present.
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
@@ -1804,7 +1867,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 	auto __pos
-	  = _M_insert_unique_node(__k, __bkt, __code, __node._M_node, __n_elt);
+	  = _M_insert_node(true_type{}, __bkt, __code, __node._M_node, __n_elt);
 	__node._M_node = nullptr;
 	return { __pos, true };
       }
@@ -1830,7 +1893,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
-	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	  = _M_insert_node(false_type{}, __hint._M_cur, __code, __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index f5809c7443a..6bb59e2be3a 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -716,7 +716,7 @@ namespace __detail
 	std::tuple<>()
       };
       auto __pos
-	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	= __h->_M_insert_node(true_type{}, __bkt, __code, __node._M_node);
       __node._M_node = nullptr;
       return __pos->second;
     }
@@ -743,7 +743,7 @@ namespace __detail
 	std::tuple<>()
       };
       auto __pos
-	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	= __h->_M_insert_node(true_type{}, __bkt, __code, __node._M_node);
       __node._M_node = nullptr;
       return __pos->second;
     }
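
As a side note for reviewers, the ownership discipline the FIXME fix relies on can be sketched standalone. The `Node`/`NodeGuard` names below are hypothetical illustrations, not the patch's types: the guard owns the freshly allocated node and deallocates it only if an exception escapes before ownership is handed over.

```cpp
#include <utility>

// Standalone sketch (hypothetical Node/NodeGuard names) of the RAII idea
// behind _NodeSmartPointer: the guard owns a freshly allocated node and
// frees it if an exception escapes before ownership is transferred.
struct Node { int value; };

class NodeGuard
{
public:
  explicit NodeGuard(Node* n) noexcept : _M_node(n) { }

  // Single owner until release: copying would double-delete.
  NodeGuard(const NodeGuard&) = delete;
  NodeGuard& operator=(const NodeGuard&) = delete;

  // Still owned at scope exit => deallocate.
  ~NodeGuard() { delete _M_node; }

  Node* _M_get() const noexcept { return _M_node; }

  // Hand the node over once no further exception can be raised.
  Node* _M_release() noexcept
  { return std::exchange(_M_node, nullptr); }

private:
  Node* _M_node;
};
```

The node is released from the guard only at the point where insertion can no longer throw, so an exception during hashing or rehashing leaves the node owned (and cleaned up) by the guard.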

Patch

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index e2e3f016a35..307865b96bf 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -197,6 +197,7 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using __hash_cached = typename __traits_type::__hash_cached;
       using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
       using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
+      using __node_sp_t = __detail::_NodeSmartPointer<__node_alloc_type>;
 
       using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
 
@@ -669,18 +670,19 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
-      // no element with its key already present). Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node with key __k and hash code __code, in bucket __bkt if no
+      // rehash (assumes no element with its key already present).
+      template<typename _NodeAccessor>
 	iterator
-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			    __node_type* __n, size_type __n_elt = 1);
+	_M_insert_unique_node(const key_type& __k, size_type __bkt,
+			      __hash_code __code, const _NodeAccessor&,
+			      size_type __n_elt = 1);
 
-      // Insert node with hash code __code. Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node with hash code __code.
+      template<typename _NodeAccessor>
 	iterator
-      _M_insert_multi_node(__node_type* __hint,
-			   __hash_code __code, __node_type* __n);
+	_M_insert_multi_node(__node_type* __hint, __hash_code __code,
+			     const _NodeAccessor& __node_accessor);
 
       template<typename... _Args>
 	std::pair<iterator, bool>
@@ -805,9 +807,9 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	      }
 	    else
 	      {
-		__ret.position
-		  = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
-		__nh._M_ptr = nullptr;
+		__ret.position = _M_insert_unique_node(__k, __bkt, __code,
+				[&__nh]()
+				{ return std::exchange(__nh._M_ptr, nullptr); });
 		__ret.inserted = true;
 	      }
 	  }
@@ -818,33 +820,23 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
       iterator
       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
       {
-	iterator __ret;
 	if (__nh.empty())
-	  __ret = end();
-	else
-	  {
+	  return end();
+
 	__glibcxx_assert(get_allocator() == __nh.get_allocator());
 
 	auto __code = this->_M_hash_code(__nh._M_key());
-	    auto __node = std::exchange(__nh._M_ptr, nullptr);
-	    // FIXME: this deallocates the node on exception.
-	    __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
-	  }
-	return __ret;
+	return _M_insert_multi_node(__hint._M_cur, __code,
+			      [&__nh]()
+			      { return std::exchange(__nh._M_ptr, nullptr); });
       }
 
+    private:
       /// Extract a node.
       node_type
-      extract(const_iterator __pos)
+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
       {
-	__node_type* __n = __pos._M_cur;
-	size_t __bkt = _M_bucket_index(__n);
-
-	// Look for previous node to unlink it from the erased one, this
-	// is why we need buckets to contain the before begin to make
-	// this search fast.
-	__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-
+	__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
 	if (__prev_n == _M_buckets[__bkt])
 	  _M_remove_bucket_begin(__bkt, __n->_M_next(),
 	     __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
@@ -861,14 +853,24 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	return { __n, this->_M_node_allocator() };
       }
 
+    public:
+      /// Extract a node.
+      node_type
+      extract(const_iterator __pos)
+      {
+	size_t __bkt = _M_bucket_index(__pos._M_cur);
+	return _M_extract_node(__bkt, _M_get_previous_node(__bkt, __pos._M_cur));
+      }
+
       /// Extract a node.
       node_type
       extract(const _Key& __k)
       {
 	node_type __nh;
-	auto __pos = find(__k);
-	if (__pos != end())
-	  __nh = extract(const_iterator(__pos));
+	__hash_code __code = this->_M_hash_code(__k);
+	std::size_t __bkt = _M_bucket_index(__k, __code);
+	if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
+	  __nh = _M_extract_node(__bkt, __prev_node);
 	return __nh;
       }
 
@@ -885,14 +887,16 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
 	    {
 	      auto __pos = __i++;
-	      const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
+	      const key_type& __k = this->_M_extract()(*__pos);
 	      __hash_code __code = this->_M_hash_code(__k);
 	      size_type __bkt = _M_bucket_index(__k, __code);
 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
 		{
 		  auto __nh = __src.extract(__pos);
-		  _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
-		  __nh._M_ptr = nullptr;
+		  _M_insert_unique_node(__k, __bkt, __code,
+				[&__nh]()
+				{ return std::exchange(__nh._M_ptr, nullptr); },
+				__n_elt);
 		  __n_elt = 1;
 		}
 	      else if (__n_elt != 1)
@@ -1634,31 +1638,25 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> pair<iterator, bool>
       {
 	// First build the node to get access to the hash code
-	__node_type* __node
-	  = this->_M_allocate_node(std::forward<_Args>(__args)...);
-	const key_type& __k = this->_M_extract()(__node->_M_v());
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(__k);
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
+	__node_sp_t __node_sp(*this,
+		      this->_M_allocate_node(std::forward<_Args>(__args)...));
+	const key_type& __k
+	  = this->_M_extract()(__node_sp._M_get()->_M_v());
+	__hash_code __code = this->_M_hash_code(__k);
 
 	size_type __bkt = _M_bucket_index(__k, __code);
-	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
+	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
 	  // There is already an equivalent node, no insertion
-	    this->_M_deallocate_node(__node);
-	    return std::make_pair(iterator(__p), false);
-	  }
+	  return { iterator(__node), false };
 
 	// Insert the node
-	return std::make_pair(_M_insert_unique_node(__bkt, __code, __node),
-			      true);
+	return
+	  {
+	    _M_insert_unique_node(__k, __bkt, __code,
+				  [&__node_sp]()
+				  { return __node_sp._M_release(); }),
+	    true
+	  };
       }
 
   template<typename _Key, typename _Value,
@@ -1673,32 +1671,28 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> iterator
       {
 	// First build the node to get its hash code.
-	__node_type* __node =
-	  this->_M_allocate_node(std::forward<_Args>(__args)...);
+	__node_sp_t __node_sp(*this,
+		      this->_M_allocate_node(std::forward<_Args>(__args)...));
 
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(this->_M_extract()(__node->_M_v()));
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
-
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	const key_type& __k
+	  = this->_M_extract()(__node_sp._M_get()->_M_v());
+	return _M_insert_multi_node(__hint._M_cur, this->_M_hash_code(__k),
+				    [&__node_sp]()
+				    { return __node_sp._M_release(); });
       }
 
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
 	   typename _Traits>
+    template<typename _NodeAccessor>
       auto
       _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 		 _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			  __node_type* __node, size_type __n_elt)
+      _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			    __hash_code __code,
+			    const _NodeAccessor& __node_accessor,
+			    size_type __n_elt)
       -> iterator
       {
 	const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1706,15 +1700,13 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  = _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count,
 					    __n_elt);
 
-      __try
-	{
 	if (__do_rehash.first)
 	  {
 	    _M_rehash(__do_rehash.second, __saved_state);
-	      __bkt
-		= _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
+	    __bkt = _M_bucket_index(__k, __code);
 	  }
 
+	__node_type* __node = __node_accessor();
 	this->_M_store_code(__node, __code);
 
 	// Always insert at the beginning of the bucket.
@@ -1722,12 +1714,6 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	++_M_element_count;
 	return iterator(__node);
       }
-      __catch(...)
-	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
-	}
-    }
 
   // Insert node, in bucket bkt if no rehash (assumes no element with its key
   // already present). Take ownership of the node, deallocate it on exception.
@@ -1735,22 +1721,22 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
 	   typename _Traits>
+    template<typename _NodeAccessor>
       auto
       _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 		 _H1, _H2, _Hash, _RehashPolicy, _Traits>::
       _M_insert_multi_node(__node_type* __hint, __hash_code __code,
-			 __node_type* __node)
+			   const _NodeAccessor& __node_accessor)
       -> iterator
       {
 	const __rehash_state& __saved_state = _M_rehash_policy._M_state();
-      std::pair<bool, std::size_t> __do_rehash
-	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
+	std::pair<bool, std::size_t> __do_rehash
+	  = _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
 
-      __try
-	{
 	if (__do_rehash.first)
 	  _M_rehash(__do_rehash.second, __saved_state);
 
+	__node_type* __node = __node_accessor();
 	this->_M_store_code(__node, __code);
 	const key_type& __k = this->_M_extract()(__node->_M_v());
 	size_type __bkt = _M_bucket_index(__k, __code);
@@ -1779,20 +1765,13 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		}
 	  }
 	else
-	    // The inserted node has no equivalent in the
-	    // hashtable. We must insert the new node at the
-	    // beginning of the bucket to preserve equivalent
-	    // elements' relative positions.
+	  // The inserted node has no equivalent in the hashtable. We must
+	  // insert the new node at the beginning of the bucket to preserve
+	  // equivalent elements' relative positions.
 	  _M_insert_bucket_begin(__bkt, __node);
 	++_M_element_count;
 	return iterator(__node);
       }
-      __catch(...)
-	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
-	}
-    }
 
   // Insert v if no element with its key is already present.
   template<typename _Key, typename _Value,
@@ -1811,12 +1790,18 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 
-	__node_type* __n = _M_find_node(__bkt, __k, __code);
-	if (__n)
-	  return std::make_pair(iterator(__n), false);
+	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
+	  return { iterator(__node), false };
 
-	__n = __node_gen(std::forward<_Arg>(__v));
-	return { _M_insert_unique_node(__bkt, __code, __n, __n_elt), true };
+	__node_sp_t __node_sp(*this, __node_gen(std::forward<_Arg>(__v)));
+	return
+	  {
+	    _M_insert_unique_node(__k, __bkt, __code,
+				  [&__node_sp]()
+				  { return __node_sp._M_release(); },
+				  __n_elt),
+	    true
+	  };
       }
 
   // Insert v unconditionally.
@@ -1837,9 +1822,11 @@  _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
 
 	// Second allocate new node so that we don't rehash if it throws.
-	__node_type* __node = __node_gen(std::forward<_Arg>(__v));
+	__node_sp_t __node_sp(*this, __node_gen(std::forward<_Arg>(__v)));
 
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	return _M_insert_multi_node(__hint._M_cur, __code,
+				    [&__node_sp]()
+				    { return __node_sp._M_release(); });
       }
 
   template<typename _Key, typename _Value,
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 878154ae210..ddcff0ec9d0 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -170,6 +170,40 @@  namespace __detail
       __hashtable_alloc& _M_h;
     };
 
+  // Node smart pointer to benefit from RAII.
+  template<typename _NodeAlloc>
+    struct _NodeSmartPointer
+    {
+    private:
+      using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
+      using __node_type = typename __hashtable_alloc::__node_type;
+
+    public:
+      _NodeSmartPointer(__hashtable_alloc& __h, __node_type* __node) noexcept
+      : _M_h(__h), _M_node(__node) { }
+
+      ~_NodeSmartPointer()
+      {
+	if (_M_node)
+	  _M_h._M_deallocate_node(_M_node);
+      }
+
+      __node_type*
+      _M_get() const { return _M_node; }
+
+      __node_type*
+      _M_release() noexcept
+      {
+	__node_type* __tmp = _M_node;
+	_M_node = nullptr;
+	return __tmp;
+      }
+
+    private:
+      __hashtable_alloc& _M_h;
+      __node_type* _M_node;
+    };
+
   // Auxiliary types used for all instantiations of _Hashtable nodes
   // and iterators.
 
@@ -706,17 +740,18 @@  namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
-				      std::tuple<const key_type&>(__k),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      return __p->_M_v().second;
+      using __node_sp_t = typename __hashtable::__node_sp_t;
+      __node_sp_t __node_sp(*__h,
+		__h->_M_allocate_node(std::piecewise_construct,
+				      std::tuple<const key_type&>(__k),
+				      std::tuple<>()));
+      return __h->_M_insert_unique_node(__k, __bkt, __code,
+				[&__node_sp]()
+				{ return __node_sp._M_release(); })->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -731,17 +766,20 @@  namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
-				      std::forward_as_tuple(std::move(__k)),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      return __p->_M_v().second;
+      using __node_sp_t = typename __hashtable::__node_sp_t;
+      __node_sp_t __node_sp(*__h,
+		__h->_M_allocate_node(std::piecewise_construct,
+				      std::forward_as_tuple(std::move(__k)),
+				      std::tuple<>()));
+      return __h->_M_insert_unique_node(
+			__h->_M_extract()(__node_sp._M_get()->_M_v()),
+			__bkt, __code,
+			[&__node_sp]()
+			{ return __node_sp._M_release(); })->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,