Re: What to do for faster `remove-duplicates'?

From: Thierry Volpiatto
Subject: Re: What to do for faster `remove-duplicates'?
Date: Wed, 06 May 2015 20:48:02 +0200

Artur Malabarba <address@hidden> writes:

>>> Looks good, please install.
>> Not so good, as it is no longer destructive for a seq > 100.
> Just pushed the following:
> modified   lisp/subr.el
> @@ -424,12 +424,12 @@ one is kept."
>            (unless (gethash elt hash)
>              (puthash elt elt hash)
>              (push elt res)))
> -        (nreverse res))
> +        (setcdr list (cdr (nreverse res))))
>      (let ((tail list))
>        (while tail
>          (setcdr tail (delete (car tail) (cdr tail)))
> -        (setq tail (cdr tail))))
> -    list))
> +        (setq tail (cdr tail)))))
> +  list)
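For readability, here is a sketch of what `delete-dups' looks like with the diff above applied. The docstring and the threshold context are reconstructed from the patch, not quoted verbatim from lisp/subr.el:

```elisp
(defun delete-dups (list)
  "Destructively remove `equal' duplicates from LIST.
Store the result in LIST and return it.  LIST must be a proper list.
Of several `equal' occurrences of an element in LIST, the first
one is kept."
  (if (> (length list) 100)
      ;; Long lists: use a hash table for O(n) duplicate detection.
      (let ((hash (make-hash-table :test #'equal))
            (res nil))
        (dolist (elt list)
          (unless (gethash elt hash)
            (puthash elt elt hash)
            (push elt res)))
        ;; Splice the deduplicated tail back into LIST so the
        ;; operation stays destructive -- callers may rely on LIST
        ;; itself being modified in place.
        (setcdr list (cdr (nreverse res))))
    ;; Short lists: the quadratic in-place walk is cheaper.
    (let ((tail list))
      (while tail
        (setcdr tail (delete (car tail) (cdr tail)))
        (setq tail (cdr tail)))))
  list)
```

Since the first occurrence of each element is pushed in order and `nreverse' restores that order, the head of `res' is always `(car list)', so `(setcdr list (cdr (nreverse res)))' leaves the original cons cell at the front.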

Also, I am not sure that pushing onto a list (res) and returning it at
the end is faster than collecting the result with maphash.
At first glance it looks faster, but the time spent consing and GC'ing
may exceed the cost of just walking the hash table with maphash.

But I may be wrong, just a thought.
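The alternative being suggested could look roughly like this (a hypothetical sketch, not code from the patch). One caveat: the Emacs Lisp manual does not guarantee that `maphash' visits entries in insertion order, so this variant may not preserve the position of first occurrences, which the push-based version does by construction:

```elisp
(defun delete-dups-via-maphash (list)
  "Non-destructive sketch: dedupe LIST, collecting via `maphash'."
  (let ((hash (make-hash-table :test #'equal))
        (res nil))
    ;; First pass: record each distinct element once.
    (dolist (elt list)
      (puthash elt elt hash))
    ;; Second pass: gather the survivors from the table, avoiding
    ;; the per-element push during the dedup loop.
    (maphash (lambda (key _val) (push key res)) hash)
    (nreverse res)))
```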

Get my Gnupg key:
gpg --keyserver pgp.mit.edu --recv-keys 59F29997 
