From: Robert Maas, see http://tinyurl.com/uh3t
Subject: Re: Why are software patents wrong?
Date: Tue, 09 Aug 2005 18:20:28 -0700

> From: r.e.ball...@usa.net (Rex Ballard)
> Eventually, in 1995, changes were made which made it easier to apply
> for patents and made it easier to grant patents on software.
> Unfortunately, this meant that there was no prior record of software
> technology from 1940 to 1995, upon which to search for prior art.  As
> a result, millions of patents have been granted for prior art.

They have done a stupid thing. Lots of algorithms are fully documented
in Communications of the ACM, which is a reputable journal they ought
to use as a reference. Lots of other algorithms are available in source
form via open source projects and JavaScript pages etc. Lots of other
algorithms are functionally available via server-side software, and
their authors might be convinced to show the algorithms to the patent
court in confidence, in return for preventing the issuance of a
patent that would retroactively make their server-side software
illegal. Then of course there are books such as Knuth's which have many
detailed algorithms described and documented, and API documentation for
Java, C, Lisp, etc., which describes many other algorithms. Finally,
lots of computer software experts are available to express an opinion
on whether a given algorithm is obvious from prior art or truly
novel/original, and such experts could search the Internet to find
evidence of such prior art. An online contest could be held to see how
quickly somebody can duplicate an invention, given only a statement of
what's desired to accomplish, no statement of how to achieve that. If
several people can each come up with a solution to the task within a
week, then obviously the algorithm under consideration isn't patentable
unless it differs from all the various submitted solutions in some
significant way such as being more efficient, and in that case the
claims by the inventor may have to be reduced to include only the
aspects that are unique to the new invention, not anything in common
with any of the single-week solutions.

> not having something patented before you start shipping it is an
> engraved invitation for these "shark" lawyers to rack up huge legal
> fees.

That would seem to be yet another argument in favor of server-side
software, where nobody can discover what algorithm is being used to
accomplish the user effect.

> Of course, these types of patent applications also have the benefit of
> including volumes of "prior art" into the public record, especially
> the patent office archives.  Typically, someone seeking a defensive
> software patent will cite precedents going back 20, 30, even 60 years,
> as a way to make sure that any attempt to sue or claim "prior art" is
> thwarted by further claims of prior art.

Hmm, sounds like mostly a good thing despite the patent itself being
wrong, but then:

> A good defensive patent can often be nullified by it's own "prior
> art" citations, which is also an acceptable outcome when the patent
> owner is the defendent.

Aha, so at that point the defensive patent has nothing bad about it at
all: a lot of research into prior art gets put into the official public
record known to the patent office, but no patent remains in effect,
only protection against anyone else trying to get a patent on the same
software technique. A few thousand cases like this and perhaps nearly
all prior art in software would be in the official public record, and
old algorithms masquerading as new and thereby getting undeserved
patents would be nigh impossible?

> The problem we have seen more recently is the suddent emergence of
> "software patent specialists", lawyers who will help someone file a
> patent, often ignoring ALL prior art not in the patent office archive.
> They are granted the patent, then they proceed to attempt to file
> lawsuits against companies with deep pockets.

So then big companies have additional incentive to avoid this problem
by being really sure to include *all* their software under defensive
patents, right?

> In practice, about 99.99% of all software isn't really patentable.
> With the availability of public specifications and standards, or just
> simple observation, the average college freshman in computer science
> can implement much of the so-called "patented" software.  The
> implementation may be different, and might even be significantly
> different, but implements the same claims.

I'm confused by what you wrote there. It was my impression that the
claims as to what an invention can accomplish, if that's in fact
something an ordinary person thinking about the general technology
might expect from it, are irrelevant to the guts of the patent, which
are claims as to *how* it accomplishes it. For example, if somebody
invented a way to store data reliably which used magnetic cores as a
method, and somebody else invented a way to store data reliably which
used bistable mechanical relays as a method, then both patents would be
simultaneously valid, because even though they accomplish the same
objective they do it by different means. Only if the overall goal of an
invention is something totally novel and non-obvious would the goal, in
addition to the means, be patentable. Is my general understanding
correct?

If so, then all the algorithms produced by the first-year student, and
anything essentially similar, would be non-patentable, but a totally
novel algorithm for the same objective/goal would be patentable.
For example, if you start with an unordered array of data, and you wish
to sort it into ascending sequence per some key within the
data or computed from the data, you would like to:
- Not use any extra memory.
- Have guaranteed runtime bounded by n * log(n)
- Have the algorithm be "stable", i.e. records with equivalent keys
   remain in the same relative sequence after sorting as before.
All known methods violate at least one of the three. (Merge sort is
stable and n log n but takes more memory temporarily during the sorting.
Heapsort is n log n and takes no extra memory but isn't stable. Many
other algorithms are stable and take no extra memory but have worst
case longer than n log n, and that notorious QuickSort likewise has a
worse-than-n-log-n worst case.) Has it been proven that satisfying all
three simultaneously is impossible? If not, maybe someday somebody will
achieve all three goals at once and get a patent on that wondrous
algorithm, a truly deserved software patent.

By the way, I had trouble remembering the name "QuickSort" and had to
look it up in Google. In the process I found this horrid Web site:
  http://linux.wku.edu/~lamonml/algor/sort/merge.html
   Pros: Marginally faster than the heap sort for larger sets.
   Cons: At least twice the memory requirements of the other sorts;
   recursive.

Wrong: Big advantage over HeapSort in that it is stable!
Wrong: All you need for MergeSort is a linked list, which for large
records takes only a tiny fraction of extra memory. (If you have an
array of records already, then the linked list can point directly into
elements of the array, so you don't need a separate copy of the data in
the linked list.)
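To make that concrete, here's a minimal sketch (my own illustration in
Python, not from the article; the names are ones I picked): each list
cell holds nothing but the index of a record plus a link, so the extra
memory is one small node per record no matter how big the records are,
and merging never copies or moves a record.

  class Node:
      """One list cell: the index of a record in the array plus a link.
      This per-record node is the only extra memory the sort needs,
      regardless of how large each record is."""
      def __init__(self, index, next=None):
          self.index, self.next = index, next

  def merge(records, a, b, key):
      """Stable merge of two ascending linked lists of record indices;
      the records themselves are compared but never moved or copied."""
      dummy = tail = Node(-1)
      while a and b:
          if key(records[a.index]) <= key(records[b.index]):
              tail.next, a = a, a.next
          else:
              tail.next, b = b, b.next
          tail = tail.next
      tail.next = a if a else b
      return dummy.next

  # Two already-sorted runs over a 4-record array, keyed on first field:
  records = [("delta", 4), ("alpha", 1), ("echo", 5), ("bravo", 2)]
  run1 = Node(1, Node(0))                  # alpha, delta
  run2 = Node(3, Node(2))                  # bravo, echo
  node = merge(records, run1, run2, key=lambda r: r[0])
  while node:
      print(records[node.index])           # alpha, bravo, delta, echo
      node = node.next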

   Like the quick sort, the merge sort is recursive which can make it a
   bad choice for applications that run on machines with limited memory.

Wrong! The following MergeSort algorithm is non-recursive:
(1) Build a queue (first-in first-out) containing single-element lists
where the single element of each queue element is (pointer to) the
corresponding record in the array.
(2) While number of elements in queue is more than 1, merge first two
elements and push merged result back onto end of queue.
(Z) At this point we can use the one remaining linked list as a virtual
sort of the array.
(3) If we really must rearrange all the physical records into ascending
sequence, all we have to do is decompose the permutation into cycles,
and perform a rotate of the records indexed by each individual cycle,
which requires a single record of extra storage and only O(n) extra
compute time (a sketch in code follows this list).
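Here is a runnable sketch of those steps (my own Python rendering of
the algorithm just described; the function names and the key parameter
are placeholders I picked, and plain Python lists of indices stand in
for the linked lists): steps (1) and (2) build the sorted index order
with a FIFO queue, giving the virtual sort of step (Z), and step (3)
applies that permutation in place by walking each cycle with one spare
record slot.

  from collections import deque

  def merge(records, a, b, key):
      """Stable merge of two ascending index runs; no record moves."""
      out, i, j = [], 0, 0
      while i < len(a) and j < len(b):
          if key(records[a[i]]) <= key(records[b[j]]):
              out.append(a[i]); i += 1
          else:
              out.append(b[j]); j += 1
      return out + a[i:] + b[j:]

  def queue_mergesort(records, key=lambda r: r):
      """Steps (1)-(2): non-recursive merge sort driven by a FIFO queue.
      Returns 'order', where order[k] is the index of the record that
      belongs in position k: the virtual sort of step (Z)."""
      queue = deque([i] for i in range(len(records)))    # (1) singletons
      while len(queue) > 1:                              # (2) merge pairs
          a, b = queue.popleft(), queue.popleft()
          queue.append(merge(records, a, b, key))
      return queue[0] if queue else []

  def apply_order_in_place(records, order):
      """Step (3): physically rearrange the records by decomposing the
      permutation into cycles and rotating each cycle, using one spare
      record slot and O(n) extra time."""
      done = [False] * len(records)
      for start in range(len(records)):
          if done[start]:
              continue
          spare = records[start]               # the single spare record
          pos = start
          while order[pos] != start:
              records[pos] = records[order[pos]]
              done[pos] = True
              pos = order[pos]
          records[pos] = spare
          done[pos] = True

  data = [("pear", 3), ("apple", 3), ("plum", 1), ("apple", 1)]
  order = queue_mergesort(data, key=lambda r: r[0])
  print(order)          # [1, 3, 0, 2]; the two "apple"s stay in order
  apply_order_in_place(data, order)
  print(data)   # [('apple', 3), ('apple', 1), ('pear', 3), ('plum', 1)]

The merging is stable (ties keep the left run's element first), and
since all record movement is deferred to step (3), the sort itself only
ever shuffles indices.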

Note that if the records are not all the same size, but are packed end
to end in a big block of memory with delimiters between them instead of
being padded with blanks to the same size, and if it's allowed to use
an index table to virtually rearrange them (or alternatively they are
implemented as a "ragged array" in the first place), then the records
never have to be moved around at all. Only the index table needs to be
rearranged, and all the sorting algorithms use about the same amount of
extra memory, making MergeSort the winner all around.
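A last sketch of that point (again my own Python illustration): the
records are packed end to end in one buffer with newline delimiters,
and sorting rearranges only the little table of (offset, length)
entries; the record bytes themselves never move.

  # Variable-length records packed end to end, newline-delimited.
  buf = b"pear 3\napple 3\nplum 1\napple 1\n"

  # Build the index table: (offset, length) of each record in buf.
  table, start = [], 0
  while start < len(buf):
      end = buf.index(b"\n", start)
      table.append((start, end - start))
      start = end + 1

  def record_key(entry):
      off, length = entry
      return buf[off:off + length].split()[0]   # sort on the first field

  table.sort(key=record_key)     # stable; only the index table changes

  print([buf[off:off + length] for off, length in table])
  # [b'apple 3', b'apple 1', b'pear 3', b'plum 1']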

I wish there were a way to get such bad Web sites censured until they
fix their major mistakes.

Back to your article I'm responding to:
> Many of these claims are very broad and general, which is something
> that lawyers  love to do in patent filings.  And when someone
> implements a piece of software that performs the same function - no
> matter how different the implementation, the patent lawyer can file a
> lawsuit based on the claim.

That seems totally wrong, contrary to the purpose of a patent, which is
to protect a novel means to an end, not protect the end itself and all
*other* means towards that end. Are the patent examiners violating the
law, or has the law been mis-written in this way?

> The real creativity comes when the case is heard before a jury.  In
> some cases, when there is a home court advantage, when the jury has no
> knowledge of software and wouldn't know a "for loop" if they saw it,
> the jury can do very strange things.

It is terribly unfortunate that the average person knows virtually
nothing about algorithms (or science either). Algorithms (and science)
should be required study in all elementary schools, right after reading,
writing, arithmetic, and government (maybe even before government).

> the judge may reject thousands of pages of usenet archives, because
> the original authors couldn't be verified and/or cross-examined.

It would seem that with a public appeal posted to the net, and some
financial reward by the party that needs the evidence, a few percent of
those authors would surface, which would be enough to defeat a bogus
patent.

> There is even the possibility that public domain software archives
> would not be admissable because the original author's identity could
> not be verified.

Why should that matter? The fact that the software has existed for a
long time should be sufficient to demonstrate that the algorithm isn't
anything new.

> Remember, in most of these cases, the plaintiff's lawyers are doing
> everything they can to prevent disclosures which would demonstrate the
> existence of prior art, therefore making the device unpatentable.

So the problem seems to be that only one side, the side wanting the
patent granted, is represented in court? Why not call in expert
witnesses, who can find examples of the algorithm already in use
commercially, and bring in representatives of those companies with
something to lose if they were denied use of their own software, and
those companies can then put up some money to locate witnesses to
defeat the patent?

> Before a patent can be granted, a patent search must be conducted.
> This search covers all of the prior art that might be similar to the
> patent being applied for.  The patent has to distinguish itself from
> these prior patents in a non-trivial way.

And those seeking a patent (except the defensive patents you mentioned,
where they pull out prior art going back up to 60 years) don't really
perform such a search, they only pretend to, right? They should be
sanctioned by the court once it is discovered how much they deliberately
overlooked.

> You have 60 years of prior art, and millions of programmers and
> software developers ranging from students and hobbiests to full-time
> professionals with PhD degrees.  But none of their contributions exist
> in the archives being searched during the application for a patent.

Not because such prior art is really hard to find, but rather because
the patent applicants deliberately restrict their search to sources that
don't include any prior art, deliberately avoiding the obvious places
for such prior art, such as CommACM etc. that I mentioned earlier, right?

> software patents are such a terrible idea.

I agree.

> The big problem now is that you have about 3 billion lines of code,
> including much of it Open Source, which is not being checked as "prior
> art" before granting a patent.

Any chance of getting together a consortium of experts to build a
master database of software prior art with a Web-based search engine,
thereby making it trivial for anyone to pull up prior art relevant to
any alleged novel invention?

> It's the biggest "property grab" [**] the United States Cavalry drove
> the Indians from the plains and the southwest onto reservations in the
> deserts of Arizona, New Mexico, and Colorado.

Amusing comparison, but missing word "since" where I marked it, right?

Does anybody have an estimate of what fraction of software prior art
has already been incorporated into official USPTO records, thanks
mostly to defensive patents, and how fast this fraction is increasing?
