Wednesday, August 31, 2022

The NIST Process for Post-Quantum Cryptography

Guest post by Jonathan Katz

Over the past few months there have been several interesting developments in the NIST post-quantum standardization process.

By way of background, since the advent of Shor's algorithm in 1994 we have known that a large-scale, general-purpose quantum computer would be able to break all currently deployed public-key cryptography in (quantum) polynomial time. While estimates vary as to when (or even whether!) quantum computers will become a realistic threat to existing public-key cryptosystems, it seems prudent to already begin developing/deploying newer "post-quantum" schemes that are plausibly secure against quantum computers.

With the above in mind, NIST initiated an open process in 2017 for designing post-quantum cryptographic standards. Researchers from around the world submitted candidate algorithms for public-key encryption/key exchange and digital signatures. These were winnowed down over a series of rounds as cryptographers publicly debated the relative merits of different proposals, or showed security weaknesses in some candidates.

On July 5 of this year, NIST announced that it had selected four of the submissions for standardization. Only one candidate for public-key encryption was chosen, along with three digital signature schemes. Three of the four selected algorithms rely on the hardness of lattice problems; the only non-lattice scheme is a hash-based signature scheme. (It is possible to build digital signatures using "symmetric-key" assumptions alone.) In addition, four other public-key encryption schemes not based on lattices were designated for further study and possible standardization at a later point in time.

Less than one month later, Castryck and Decru announced a classical attack on SIKE, one of the public-key encryption schemes chosen for further study. SIKE is based on a conjectured hard problem related to isogenies on supersingular elliptic curves. The attack was not just theoretical; the researchers were able to implement it and run it in less than a day, depending on the security level being considered. Details of the attack are quite complex, but Galbraith gives a high-level overview. Subsequent improvements to the attack followed.

It is worth adding that the above follows an entirely classical attack shown roughly six months earlier on Rainbow, another submission to the NIST standardization process that made it to the 3rd round. (Rainbow is a signature scheme that relies on an entirely different mathematical problem than SIKE.) For completeness, note that none of the four algorithms selected for standardization are affected by any of these attacks.

A few reflections on the above:
  • It is amazing that the factoring and RSA problems are still hard (for classical computers), more than 40 years after they were proposed for cryptography. The same goes for the discrete-logarithm problem (in certain groups).
  • It is not easy to find other hard mathematical problems! Code-based cryptography has been around about as long as factoring, but has been somewhat unpopular for reasons of efficiency. Lattice-based cryptosystems still seem to be the leading candidates.
  • We need more people (including non-cryptographers) studying cryptographic assumptions. The attacks on SIKE involved deep mathematics; attacks on lattice problems may involve algorithmic ideas that cryptographers haven't thought of.

Wednesday, August 24, 2022

Computational Intractability: A Guide to Algorithmic Lower Bounds. First draft available! Comments Welcome!

(This post is written by Erik Demaine, William Gasarch, and Mohammad Hajiaghayi)

In 1979 Garey and Johnson published the classic

        Computers and Intractability: A Guide to the Theory of NP-Completeness

There has been A LOT of work on lower bounds since then.

Topics that were unknown in 1979 include:

  • Parameterized Complexity,
  • Lower bounds on approximation,
  • Other hardness assumptions (ETH, the 3SUM conjecture, the APSP conjecture, UGC, and others),
  • Online Algorithms,
  • Streaming Algorithms,
  • Polynomial Parity Arguments, and
  • Parallelism.

In addition, many new problems have been shown complete or hard for NP, PSPACE, and other classes.


Hence a sequel is needed. While it is impossible for one book to encompass all, or even a large fraction, of the work since then, we are proud to announce a book that covers some of that material:

Computational Intractability: A Guide to Algorithmic Lower Bounds

by Erik Demaine, William Gasarch, and Mohammad Hajiaghayi. MIT Press. 2024

See HERE for a link to a first draft.

We welcome corrections, suggestions and comments on the book. Either leave a comment on this blog post or email us at [email protected]



Monday, August 22, 2022

20 Years of the Computational Complexity Weblog

Birthday Cake

I first posted on this blog twenty years ago today. It remains the oldest and longest-running weblog in theoretical computer science, possibly in all of computer science. In those twenty years we've had nearly 3000 posts, over 23,000 comments and 10,000,000 page views. Bill Gasarch officially joined me as a co-blogger over 15 years ago, on March 30, 2007.

We've seen major results in computational complexity but the biggest challenges remain, in particular major separations of complexity classes. We've also had a front row seat to a computing revolution with  the growth of cloud and mobile computing, social networks connecting us for better or for worse, and incredible successes of machine learning. It's almost as though we've been handed an oracle that gives us much of the goodness of P = NP while leaving cryptography intact. 

What will the next twenty years bring? We'll be watching, posting and tweeting. Hope you keep reading and commenting. 

Thursday, August 18, 2022

Conference Modality

We have had an almost normal summer conference season, for some sense of normal. At one of those conferences I participated in a hybrid conversation about whether the conference should be in-person, virtual or hybrid the following year. Here are some general thoughts.

In-Person

The traditional conference format. People travel from near and far to a hotel, conference center or campus location. Talks given in large rooms, often in parallel. A reception, some social activities, participants gathering in small groups to go out for meals. 

Positives: In-person maximizes interaction between participants. Being physically away from your home means you can focus your time on the conference and your fellow participants. This was more true before the mobile devices/laptops/internet days, but still most participants will spend more time on-site than on-line.

Negatives: Expensive--with registration, hotel and air fare, even a domestic participant could have to pay $2000 or more, and more still for those traveling internationally. Visas can be hard to get. Some still feel unsafe in large groups. People often leave early, pity the last speakers. And don't forget the carbon footprint.

As countries declare war on other countries or states deny certain rights, there is a push against meetings in certain places. Note the disclaimer for next year's FCRC. You might upset some people if you have conferences at these locations (and others if you don't).

Virtual

In the past, virtual conferences would never have been taken seriously, but Covid forced our hands.

Talks are streamed or pre-recorded. Some interaction happens through chat during talks, Zoom get-togethers or a system like Virtual Chair. Even if we had a perfect "metaverse" experience where we could get together as though we were in person, not being there in person means we wouldn't make it a priority.

The big advantages: costs are low, it's easy to attend talks, and there is no danger of spreading disease. Still, a virtual conference can feel too much like just a collection of streamed and recorded talks.

Hybrid

So let's make the conference hybrid and have the best of both worlds. Alas, it doesn't work out that way. It's nearly impossible to have good interaction between in-person and virtual participants--basically you have to run two separate meetings. Do you allow virtual talks, or require an author to show up in person?

How do you set prices? Lower prices for virtual attendance increase access but decrease in-person attendance. Participants (or their advisors) might opt to save expenses and attend virtually instead of in person, reducing the networking opportunities for everyone.

Most conferences tend to take the hybrid route to avoid the complaints if one went fully in-person or virtual, but hybrid just pretty much guarantees a mediocre experience for all.

Opinion

My suggestion is to run the conference in person some years and virtually the others. We already have too many conferences, a byproduct of our field using conferences as the primary publication venue. I suggest following conferences like the International Congress of Mathematicians or the Game Theory World Congress, held every four years. If the main conference of a field is held every four years, researchers, particularly senior researchers, make a bigger effort to be there. You can have the virtual meetings the other years so researchers, particularly students, can continue to present their work.

No easy solutions, and CS conferences have not worked well for years. Maybe the struggle to define future conferences will allow us to focus more on connecting researchers than on just being "journals that meet in a hotel".

Monday, August 15, 2022

A non-controversial question about the Documents Donald Trump had in his house

This is a non-partisan post. In the interest of disclosure I will divulge that I do not think private citizens should have top secret government documents in their house.

One phrase I kept hearing in the reporting was (I paraphrase and may have the number wrong)

                                  The FBI removed 15 boxes of documents

Documents? Like--- on paper? 

a) Are the documents also online someplace? Perhaps they intentionally are not so that they can't be hacked. 

b) Is having top secret documents only on paper safer than having them in electronic form? Normally I would think so. Donald Trump  having them is a very unusual case. 

c) Having to store all of those documents on paper would seem to create storage problems. I can imagine someone with NO bad purposes making copies and taking them home since they are tired of only being able to read them in a special room.

d) A problem with having them ONLY on paper is that if an accident happens and they get destroyed there is no backup. Or is there? Are there copies somewhere? That leads to twice the storage problems. 

e) There is a tradeoff between security and convenience. Having the documents only on paper is an extreme point on that tradeoff, but it may be the right one. It may depend on how important it is to keep the documents secret.

f) I've heard (apocryphal?) stories about some top secret documents also being available in public through quite legal sources (e.g., a physics journal that discusses nuclear stuff). Does the government classify too much? If so then the problem arises of people not taking the classification seriously and getting careless. I doubt that is what happened here.

g) The question I am most curious about is why did he take them? For most of his other actions his motivations are clear (e.g., he is pushing STOP THE STEAL since he wants to be president). But for this one it's not clear. Unfortunately, I doubt we'll ever find out. Maybe the answer is in some documents, either on paper or electronic.





Monday, August 08, 2022

The Godfather of Complexity

Juris Hartmanis 1928-2022

On Friday, July 29th, I was in the immigration line at an airport in Mexico. My phone rang with Bill Gasarch on the Caller ID but, starting vacation, I declined the call. The voicemail gave me the sad news that Juris Hartmanis, the person who founded computational complexity and brought me into it, had passed away earlier that day. I messaged Bill and told him to write an obit and I'd follow with something personal when I returned.


Hartmanis and Stearns in 1963

In November 1962 Hartmanis, working with Richard Stearns at the GE Labs in Schenectady, determined how to use Turing machines to formalize the basic idea of measuring resources, like time and memory, as a function of the size of the problem being solved. Their classic Turing Award-winning paper On the Computational Complexity of Algorithms not only gave this formulation but also showed that increasing resources increases the problems one can solve. The photo above, from a post celebrating the 50th anniversary of the paper, shows Hartmanis and Stearns with the main theorem of their paper on the board.
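
(A standard modern statement of their time hierarchy theorem, in today's textbook form with the log factor from the later Hennie-Stearns simulation rather than the weaker bound in the original paper: if T_2 is time-constructible and T_1(n) \log T_1(n) = o(T_2(n)), then DTIME(T_1(n)) \subsetneq DTIME(T_2(n)).)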

Twenty-one years later, a junior at Cornell University still trying to find his way took undergraduate theory from the man himself. Juris brought the topics to life and I found my passion. At the beginning of the class, he said the highest grade usually went to an undergrad followed by the grad students in the class. I was a counterexample, as I had the second highest grade. Never did find out who beat me out.

In spring of my senior year, 1985, I forwent the traditional senior-slump Wines course for graduate complexity with Juris. He focused the course around the isomorphism conjecture he developed with his student Len Berman, which implies P≠NP, and Hartmanis believed using the conjecture might lead to settling P vs NP. He offered an automatic A to anyone who could prove the isomorphism conjecture. I guess any other proof of P≠NP only warranted a B?

I would later be obsessed by the isomorphism conjecture as an assistant professor, coming up with not one but two oracles making it true. The isomorphism conjecture didn't end up settling P vs NP, but then again neither did any other approach.

It wasn't just me; there was a reason that many of the great American complexity theorists, including Ryan Williams, Scott Aaronson and my own PhD advisor Michael Sipser, were undergrads at Cornell. Many more were PhD students of Hartmanis.

Juris Hartmanis had a certain gravitas in the community. Maybe it was his age, the way he dressed up, his seminal research in the field, or just that Latvian accent. He founded the CS department at Cornell in the 60s and served as head of the CISE directorate at the National Science Foundation in the 90s. His 60th birthday party at the 3rd Structures in Complexity conference (now the Computational Complexity Conference) was the only time I've seen complexity theorists in ties.

Juris Hartmanis (center) being toasted by Janos Simon

A few of my favorite Hartmanis quotes.
  • "We all know P is different than NP. We just don't know how to prove it." - Still true.
  • "I only make mistakes in the last five minutes of the class." - Sometimes he made a mistake with ten minutes left but only admitted it in the last five minutes.
  • "Primality is a problem not yet known to be in P but is hanging on by its fingernails with its grip continuing to loosen each day." - Juris Hartmanis said this in 1986, with primality hanging on for another 16 years.
Thanks Juris for creating the foundations of our field and inspiring so many people, yours truly included, to dedicate ourselves to it.

Much more to read:

Sunday, August 07, 2022

The Held Prize for comb. opt. AND Disc Opt AND Alg AND Complexity theory AND related parts of CS.

Dan Spielman asked me to blog about the Held Prize. I first present what he sent me, and then have some thoughts.


FROM DAN: 

--------------------------------------------------------------------------------------------------

Nominations are now being accepted for the National Academy of Sciences' 2023 Michael and Sheila Held Prize. The Held Prize honors outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science, such as the design and analysis of algorithms and complexity theory. This $100,000 prize is intended to recognize recent work (defined as published within the last eight years). Additional information, including past recipients, eligibility requirements, and more, can be found here.

All nominations must be submitted online. Unless otherwise stated, the following materials must be submitted:

  • A letter from the nominator describing the candidate's work and why he or she should be selected for the award. No more than three (3) pages.
  • Curriculum vitae. No more than two (2) pages (similar to CVs included with NSF proposals).
  • Bibliography listing no more than twelve (12) of the nominee's most significant publications.
  • Suggested citation. A 50-word summary stating why the nominee should be considered for this award. (Citation examples)
  • Two letters of support. Support letters must be written by individuals from institutions outside both the nominator's and the nominee's institution. Up to three letters of support are accepted.

Nominations will be accepted through Monday, October 3, 2022. Please help spread the word that the nomination process is underway. 

----------------------------------------------------------------------------------------------------

BILL COMMENTS

1) The scope seems rather broad (Dan confirmed this in private email) in that it's Comb Opt AND Discrete Opt OR related fields like algorithms and complexity theory.

2) The research has to be Outstanding AND Innovative AND Creative AND Influential. That seems hard to do :-(  If they made it an OR instead of an AND I might ask someone to nominate me for my Muffin Work. It does use 0-1 programming!

3) The past winners are, of course, very impressive. But there is one I want to point out to emphasize that the scope is broad: Amit Sahai won in 2022, and here is what the webpage says about it:

For a leading role in development of cryptographic Software Obfuscation and its applications, starting from initial conception of "Indistinguishability Obfuscation" and culminating in new constructions based upon well-founded cryptographic assumptions. These breakthroughs highlight how computational complexity can enable secrecy while computing in insecure environments.

4) Comb Opt and Discrete Opt seem to be Operations Research. So this inspires the following question:

What are the similarities and differences between Operations Research and Research on Algorithms? 

I tend to think of Operations Research  as being more tied to the real world- but is that true?

5) Not enough 2-letter combinations  for what you want to say: I had to use the term Operations Research instead of the abbreviation OR since I was using OR for or. Oh well. 



Saturday, July 30, 2022

Juris Hartmanis passed away on July 29 at the age of 94

Juris Hartmanis, one of the founders of Complexity Theory, passed away on July 29 at the age of 94. The Gödel's Lost Letter blog has an obit posted here. Scott Aaronson has some words and a guest post by Ryan Williams here. When other bloggers post obits I will update this paragraph to point to them.

Here is one non-blog obit: here.  For an interview with Juris see here.

Hartmanis and Stearns shared the 1993 Turing Award for the paper On the Computational Complexity of Algorithms (see here for the paper and see here for his Turing Award talk). In that paper they define DTIME(T(n)) for Turing Machines. They also proved some theorems about how changes to the model (1-tape, 2-tape, 1-dim, 2-dim, and others) change the notion of DTIME(T(n)), so they were well aware that the definition was not robust. They also have some theorems about computable numbers.

We are used to the notion that you can measure how long a computation takes by counting the number of steps on a Turing Machine. Before the Hartmanis-Stearns paper this was not how people thought of things. Knuth (I suspect independently) was looking at what we might now call concrete complexity: how many operations does an algorithm need? Hartmanis-Stearns were beginning what is now called Complexity Theory.

 Recall that later, the Cook-Levin Theorem used Turing Machines. 

If Hartmanis-Stearns had not started the process of putting complexity on a rigorous mathematical basis, how might complexity theory have evolved? It is possible we would not have the Cook-Levin Theorem or the notion of NP. It is possible that we would ASSUME that SAT is hard and use that to show other problems hard, and also do reverse reductions to get some problems equivalent to SAT. Indeed, this IS what we do in other parts of theory, assuming the following problems are hard (for various definitions of hard): 3SUM, APSP, Unique Games. And in crypto: factoring, DL, SVP, and other problems.

Hartmanis did other things as well. I list some of them that are of interest to me - other people will likely list other things of interest to them. 

0) He had 21 PhD Students, some of them quite prominent. The list of them is here.

1) The Berman-Hartmanis Conjecture: All NP-complete sets are polynomial-time isomorphic. Seems true for all natural NP-complete sets. Still open. This conjecture inspired a lot of work on sparse sets, including the result that if a sparse set S is btt-hard for NP, then P=NP (proven by Ogiwara-Watanabe).

2) The Boolean Hierarchy: we all know what NP is. What about sets that are the difference of two NP sets? What about sets of the form A - (B-C) where A, B, C are all in NP? These form a hierarchy. We of course do not know if the hierarchy is proper, but if it collapses then PH collapses.

3) He helped introduce time-bounded Kolmogorov complexity into complexity theory, see here.

4) He was Lance Fortnow's undergraduate advisor. 

5) There is more but I will stop here.



Sunday, July 24, 2022

100 Best Number Theory books of all Time---except many are not on Number Theory

I was recently emailed this link:


That sounds like a good list to have!  But then I looked at it. 

The issue IS NOT that the books on it are not good. I suspect they all are.

The issue IS that many of the books on the list are not on Number Theory.

DEFINITELY NOT:

A Mathematician's Apology by Hardy

The Universe Speaks in Numbers by Farmelo (looks like Physics)

Category theory in Context by Riehl

A First Course in Mathematical logic and set theory by O'Leary

Astronomical Algorithms by Meeus (Algorithms for Astronomy)

Pocket Book of Integrals and Math Formulas by Tallarida

Entropy and Diversity by Leinster

BORDERLINE:

Too many to name, so I will name categories (Not the type Riehl talks about)

Logic books. Here Number Theory  seems to mean Peano Arithmetic and they are looking at what you can and can't prove in it. 

Combinatorics books:  Indeed, sometimes it is hard to draw the line between Combinatorics and Number Theory, but I still would not put a book on Analytic Combinatorics on a list of top books in Number Theory. 

Discrete Math textbooks: Yes, most discrete math textbooks have some elementary number theory in them, but that does not make them number theory books.

Abstract Algebra, Discrete Harmonic Analysis, other hard math books: These have theorems in Number Theory as an Application.  But they are not books on number theory. 

WHAT OF ALL THIS? 

Lists like this often have several problems

1) The actual object of study is not well defined.

2) The criteria for being good are not well defined.

3) The list is just one person's opinion. If I think the best math-novelty song of all time is William-Rowan-Hamilton (see here) and the worst one is the Bolzano--Weierstrass rap (see here), that's just my opinion. Even if I were the leading authority on math novelty songs and had the largest collection in the world, it's still just my opinion. (Another contender for worst math song of all time is here.)

4) Who is the audience for such lists? For the Number Theory Books is the audience ugrad math majors? grad math majors? Number Theorists? This needs to be well defined.

5) The list may tell more about the people making the list than the intrinsic quality of the objects. This is more common in, say, the ranking of presidents. My favorite is Jimmy Carter since he is the only one with the guts to be sworn in by his nickname Jimmy, unlike Bill Clinton (sworn in as William Jefferson Clinton, a name only used by his mother when she was mad at him) or Joe Biden (sworn in as Joseph Robinette Biden, which I doubt even his mother ever used). My opinion may seem silly, but it reflects my bias towards nicknames, just as the people who rank presidents use their bias.

Wednesday, July 20, 2022

What is known about that sequence

 In my last post I wrote:


---------------------------

Consider the recurrence


a_1=1

for all n\ge 2, a_n = a_{n-1} + a_{n/2}.

For which M does this recurrence have infinitely many n such that a_n \equiv  0 mod M?


I have written an open problems column on this for SIGACT News which also says what is known (or at least what I know is known). It will appear in the next issue.

I will post that open problems column here on my next post.

Until then  I would like you to work on it, untainted by what I know. 
----------------------------------------

I will now say what is known and point to the open problems column, co-authored with Emily Kaplitz and Erik Metz. 

If  M=2 or M=3 or M=5 or M=7 then there are infinitely many n such that a_n \equiv 0 mod M

If M\equiv 0 mod 4 then there are no n such that a_n \equiv 0 mod M

Empirical evidence suggests that

If M \not\equiv 0 mod 4 then there are infinitely many n such that a_n\equiv 0 mod M

That is our conjecture. Any progress would be good- for example proving it for M=9. M=11 might be easier since 11 is prime. 
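
For readers who want to experiment, here is a minimal sketch in Python (my code, not from the open problems column; the function name and the cutoffs are arbitrary choices) that computes a_n mod M and counts how many n \le N have a_n \equiv 0 mod M:

# Sketch: count n <= N with a_n = 0 (mod M), where a_1 = 1 and
# a_n = a_{n-1} + a_{floor(n/2)}.  All arithmetic is done mod M.
def zeros_mod(M, N):
    a = [0, 1 % M]                      # a[0] is unused, a[1] = 1
    count = 0
    for n in range(2, N + 1):
        a.append((a[n - 1] + a[n // 2]) % M)
        if a[n] == 0:
            count += 1
    return count

for M in range(2, 13):
    print(M, zeros_mod(M, 10**5))

By the result above, for M \equiv 0 mod 4 the count should stay at 0; for the other M the counts appear to keep growing as N grows, which is the empirical evidence behind the conjecture.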

The article that I submitted is HERE

Monday, July 18, 2022

An open question about a sequence mod M.

In this post n/2 means floor{n/2}

Consider the recurrence


a_1=1

for all n\ge 2, a_n = a_{n-1} + a_{n/2}.

For which M does this recurrence have infinitely many n such that a_n \equiv  0 mod M?


I have written an open problems column on this for SIGACT News which also says what is known (or at least what I know is known). It will appear in the next issue.

I will post that open problems column here on my next post.

Until then  I would like you to work on it, untainted by what I know. 

ADDED LATER: I have now posted the sequel which includes a pointer to the open problems column. To save you time, I post it here as well.






Monday, July 11, 2022

Review of The Engines of Cognition: Essays From the LessWrong Forum/Meta question about posts

 A while back I reviewed A Map that Reflects the Territory which is a collection of essays posted on the lesswrong forum. My review is here. I posted it to both this blog and to the lesswrong forum. In both cases I posted a link to it. My post to lesswrong is here

On the lesswrong post many of the comments, plus some private emails, told me NO BILL- don't post a link, post it directly as text. It was not clear how to do that, but I got it done with help.

On complexity blog nobody commented that this was a problem. Then again, nobody commented at all, so it's not clear what to make of that.

So

Meta Question: Is posting a link worse than posting direct text? Note that the book review was 12 pages long and looked great in LaTeX. 

Meta Question: Why did lesswrong care about the format but complexityblog did not? (Probable answer: complexityblog readers did not care at all, whereas lesswrong readers cared about what I thought about lesswrong.)

Another Question, not Meta. One of the comments was (I paraphrase)

When I open a pdf file I expect to see something in the style of an academic paper. This is written in very much a chatty, free-flowing blog post style with jokes like calling neologisms ``newords'', so the whole thing felt more off-kilter than was intended. The style of writing would prob work better as an HTML blog post (which could then be posted directly as a lesswrong post here instead of hosted elsewhere and linked.)

I think it's interesting that the format of an article telegraphs (in this case incorrectly) what type of article it will be. Is this a common problem? I have had the experience of reading a real academic paper and being surprised that some joke or cultural reference is in it, though I do not object to this.

Another comment and question

I was surprised the post only had 11 karma when I saw it (William had sent me an advance copy and I'd really liked reading it) but when I saw that it was a link post, I understood why.

I find this hilarious- they have some way the posts are rated! First, Lance told me very early on to never worry about comments, and I don't. Second, it reminds me of the Black Mirror episode Nosedive.

ANYWAY, I have reviewed another collection of essays from lesswrong, this one called The Engines of Cognition. I am posting it here as a link: here, and I will post it on lesswrong as full text (with help) in a few days.

I am posting it so I can get comments before I submit it to the SIGACT News book review column. But this is odd since I think this blog has more readers than SIGACT News has subscribers, so perhaps THIS is its real debut, not that. And of course the lesswrong forum is a place where more will read it since it's about them.

So- I appreciate comments to make it a better review!





Wednesday, July 06, 2022

The Highland Park Shooting

This week I should be celebrating Mark Braverman's Abacus Medal and the Fields Medalists. Instead my mind has been focused 25 miles north of Chicago.

Mass shootings in the United States have become far too commonplace, but the shooting at a fourth of July parade in Highland Park, Illinois hit home. Literally: Highland Park was home for me from 2003 to 2012. We've been in downtown Highland Park hundreds of times. We've attended their fourth of July parade in the past. My daughter participated in it as part of the high school marching band.

We were members of North Shore Congregation Israel. My wife, who had a party planning business back then, worked closely with NSCI events coordinator Jacki Sundheim, tragically killed in the attack.

We lived close to Bob's Deli and Pantry and we'd often walk over there for sandwiches or snacks, sometimes served by Bob Crimo himself. The alleged shooter, Bobby Crimo, was his son.

We spent the fourth with friends who came down from Glencoe, the town just south of Highland Park. We spent much of the day just searching for updates on our phones.

I wish we could find ways to reduce the shootings in Highland Park and those like it, the violence that plagues Chicago and other major cities and the highly polarized world we live in which both hampers real gun reforms and creates online groups that help enable these awful events. But right now I just mourn for the lives lost in the town that was my home, a town that will never fully recover from this tragedy.

Tuesday, June 28, 2022

A Gadget for 3-Colorings

Following up on Bill's post earlier this week on counting the number of 3-colorings, Steven Noble emailed us with some updated information. The first proof that counting 3-colorings is #P-complete is in a 1986 paper by Nati Linial. That proof uses a Turing reduction using polynomials based on posets.

Steven points to a 1994 thesis of James Annan under the direction of Dominic Welsh at Oxford that gives the gadget construction that I tried, and failed, to find in Bill's post.

Think of color 0 as false and color 1 as true and use this gadget in place of the OR-gadgets in the regular NP-complete proof of 3-coloring. I checked all eight values of a, b and c and the gadget works as promised.
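
The gadget itself appeared as a figure, so here is only a generic sketch in Python (the function names are mine, and the caller must supply the gadget's edge list; this is not Annan's construction) of the kind of exhaustive check described above: enumerate all proper 3-colorings of a small gadget graph and group them by the colors assigned to the designated vertices a, b, c and the output vertex.

from itertools import product

# Generic brute-force check for a small gadget graph, given as an edge list.
def proper_three_colorings(num_vertices, edges):
    """Yield every proper 3-coloring as a tuple of colors 0/1/2."""
    for coloring in product(range(3), repeat=num_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            yield coloring

def gadget_table(num_vertices, edges, a, b, c, out):
    """For each coloring of the input vertices a, b, c, record which colors
    the output vertex takes and how many proper 3-colorings realize each."""
    table = {}
    for col in proper_three_colorings(num_vertices, edges):
        table.setdefault((col[a], col[b], col[c]), []).append(col[out])
    return table

One can then read off, for each of the eight input cases, which output colors are possible and how many extensions each case has, which is the kind of bookkeeping a counting reduction needs.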

James Annan later became a climate scientist and co-founded Blue Skies Research in the UK. 

Steven also noted that counting 2-colorings is easy, because for each connected component, there are either 0 or 2 colorings.
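
To make that concrete, here is a minimal sketch in Python (mine, not from Steven's email): the number of proper 2-colorings is 0 if any connected component contains an odd cycle, and otherwise 2 raised to the number of components, since each bipartite component has exactly two colorings.

from collections import deque

# #2COL: each bipartite component contributes a factor of 2 (swap the two
# colors); any odd cycle forces the total count to 0.
def count_two_colorings(num_vertices, edges):
    adj = [[] for _ in range(num_vertices)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * num_vertices
    components = 0
    for s in range(num_vertices):
        if color[s] is not None:
            continue
        components += 1
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if color[w] is None:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return 0            # odd cycle: no proper 2-coloring
    return 2 ** components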

Update 4/22/24: We later learned of another gadget construction by Graham Farr in a 1994 Acta Informatica paper

Sunday, June 26, 2022

Counting the Number of 3-colorings of a graph is Sharp-P complete. This should be better known.

(ADDED LATER- Lance and I were emailed more information on the topic of this post, and that was made into a post by Lance which is here.) 


BILL: Lance, is #3COL #P complete? (#3COL is: Given a graph G, return the number of  different 3-colorings it has.) 

LANCE: Surely you know that for all natural A,  #A is #P complete. 

BILL: There is no rigorous way to define natural. (I have several blog posts on this.) 

LANCE: Surely then for all the NP-complete problems in Garey & Johnson.

BILL: I know that. But is there a proof that #3COL is #P-complete? I saw a paper that claimed the standard proof that 3COL is NPC works, but alas, it does not. And stop calling me Shirley.

LATER

LANCE: I found this cool construction of an OR gate that creates a unique coloring.


LATER

LANCE: That didn't work because you need to take an OR of three variables. OK, this isn't that easy.

LATER

LANCE: I found an unpublished paper (it's not even in DBLP) that shows #3COL is #P-complete using a reduction from NAE-3SAT, see here. The proof was harder than I thought it would be.

BILL: Great! I'll post about it and give the link, since this should be better known. The link is here.

-----------------------------

This leads to a question I asked about 2 years ago on the blog (see here) so I will be brief and urge you to read that post.

a) Is every natural NPC problem also #P-complete? Surely yes, though this statement is impossible to make rigorous.

b) Is there some well defined class LANCE of LOTS of NPC problems and a theorem saying that for every A in LANCE, #A is #P-complete? The last time I blogged about this (see above pointer) a comment pointed me to a cs stackexchange question here that pointed to an article by Noam Levine, here, which has a theorem that is not quite what I want but is interesting. Applying it to 3COL, it says that there is a poly time NTM M that accepts 3COL such that #M is #P-complete. Not just for 3COL but for many problems.

c) Is there some reasonable hardness assumption H such that from H one can show there is a set A that is NP-complete but #A is NOT #P-complete? (The set A will be contrived.)

ADDED LATER: Is #2-COL known to be #P-complete? This really could go either way (P or #P-complete) since some problems in P have their #-version in P, and some have their #-version be #P-complete.

Sunday, June 19, 2022

Guest post by Prahalad Rajkumar: advice for grad students

I suspect that Lance and/or I have had blog posts giving advice to grad students. I won't point to any particular posts since that's a hard thing to search for. However, they were all written WAY AFTER Lance and I actually were grad students. Recently a former grad student at UMCP, Prahalad Rajkumar, emailed me that he wanted to do a post about advice for grad students. Since he graduated more recently (Master's degree in CS in 2009, on Monte Carlo techniques, followed by a job as a programmer-analyst), his advice may be better, or at least different from, ours.

Here is his guest post with an occasional comment by me embedded in it. 

----------------------------

 I Made this Fatal Mistake when I Joined a Graduate Program at Maryland

Getting accepted to a graduate program in a good school is an honor.

It is also an opportunity to do quality work and hone your skills. I made one fatal mistake at the start of my master’s degree at the University of Maryland which took me down a vicious rabbit hole. I believed that I was not cut out for this program.

The Only Person Who Gave an Incorrect Answer

Before the start of my graduate studies, there was an informal gathering held for newer students and some faculty members. A faculty member asked a basic algorithm question.

Everyone in the room gave one answer. I gave another answer.

This is real life and not Good Will Hunting, and of course, I was wrong. I had misunderstood the question. It would have been a simple matter to shrug and move forward. But the paternal voice in my head saw a good opportunity to continue to convince me that I was an imposter who did not belong here.

Who is Smarter than Whom?

Some of my fellow incoming graduate students, who TAed with me for Bill Gasarch’s class, played an innocent looking game.

“That guy is so smart”.

“I wish I were as smart as her”.

They couldn’t know that this would affect me. I too did not know that this could affect me. But it did. I asked myself “Am I smarter than person X?”. Each time, the paternal voice in my head was quick to answer “No”. And each time I took this “No” seriously.

NOTE FROM BILL: Professors also play the who-is-smarter-than-whom game, and we shouldn't.

I Didn’t Choose My Classes Wisely

I made a few mistakes in choosing my classes. I chose Concrete Complexity with Bill, which I later realized I had no aptitude for. I chose an undergraduate class taught by a professor whose style did not resonate with me. Mercifully, I chose a third class that I liked and excelled in. A class which did not destroy my confidence.

In retrospect, though I chose a couple of classes that were not my cup of tea, I compounded my problems with the stories I told myself. I had several good options available to me. I could redouble my efforts in the said classes and give it my best shot. I could accept my inevitable “B” grades in these classes, and be mindful to choose better classes in the upcoming semesters.

I, however, did the one thing I should not have done: I further convinced myself that I was not cut out to be a graduate student.

NOTE FROM BILL: Some students wisely ask around to find out who is a good teacher. Prahalad points out that this is just the first question. A class may be appropriate for you or not based on many factors, not just whether the instructor is a good teacher.

I Fell Victim to Impostor Syndrome

I kept compounding my woes in my second and third semesters. Things got bad -- I took up a position as a research assistant in my third semester. My confidence was low -- and I struggled to do basic tasks that fell under my areas of competence.

In my fourth semester, I convinced myself that I could not code. In a class project where I had to do some coding as a part of a group project, I struggled to write a single line of code.

When I confessed this to one of my group members, he got me out of my head. He got me to code my part more than capably. I’ve written about this experience here.


 It Does Not Matter in the Slightest

I wish I could tell the 2007 version of myself the following: It doesn’t matter who is smarter than whom. In any way whatsoever. We are on our individual journeys. In graduate school. In life.

Comparing myself with another person is as productive as playing several hours of angry birds.

 The Admission Committee Believed in Me.

There was one good reason I should have rejected the thought that I did not belong in the program. The admissions committee, consisting of several brilliant minds, believed that I belonged here. If they thought I should be here, why should I second-guess them?

NOTE FROM BILL: While the Admissions committee DID believe in Prahalad and was CORRECT in this, I would not call the committee brilliant. As is well known, the IQ of a committee is the IQ of its dumbest member divided by the number of people on the committee. 

Bill Gasarch’s Secret Sauce

Since I took a class with Bill and TAed for him, I had occasion to spend a lot of time with Bill. In one conversation, Bill told me something profound. He told me the secret sauce behind his accomplishments. No, it was not talent. It was his willingness to work as hard as it takes. And working hard is a superpower which is available to anyone who is inclined to invoke it. I wish I had.

BILL COMMENT: The notion that hard work is important is of course old. I wonder how old. One of the best expressions of this that I read was in the book Myths of Innovation, which said (a) great ideas are overrated, (b) hard work and follow-through are underrated. There are more sources on this notion in the next part of Prahalad's post. (Side note: I got the book at the Borders Books going-out-of-business sale. Maybe they should have read it.)


Talent is Overrated.

I read a few books in the last couple of years that discussed the subject of mastery: Mastery by Robert Greene, Peak by Anders Ericsson, Talent is Overrated by Geoff Colvin, The Talent Code by Daniel Coyle, Grit by Angela Duckworth, Mindset by Carol Dweck

There was one point that all of these books made: talent is not the factor which determines a person's success. Their work ethic, their willingness to do what it takes, and many hours of deliberate practice are the secret of success. Of course, talent plays a part -- you can't be 5'1" and hope to be better than Michael Jordan. But in the graduate school setting, where the majority are competent, it really is a matter of putting in the effort.

 Follow the Process

Bill Walsh signed up as the coach of the languishing 49ers football team. The title of his bestselling book describes his coaching philosophy: The Score Takes Care of Itself. He established processes, focusing on the smallest of details. Walsh made everyone on the football team and in the administrative departments follow their respective processes. Long story short: the score took care of itself. The 49ers won 3 Super Bowls, among other impressive performances.

If I had to do it all again: I would get out of my head. And keep going with a disciplined work ethic. Establish a process. Follow the process. And let the results take care of themselves.


All’s Well That Ends Well

I grinded and hustled and successfully completed my Master's degree. However, instead of making the journey a joyride, I got in my own way and complicated things for no good reason.

 Final Thoughts

As William James said, a person can change his life by changing his attitude. All I needed to do was change my thinking -- work hard -- and the “score would have taken care of itself”.

I thought I was alone. But I found out that in other spheres a non-negligible percentage of people fall prey to the impostor syndrome. I wanted to write this to help any student who may be going through the problem that I did. If you are going through self-doubt, my message to you is to get out of your head, believe that you are capable (and make no mistake, you certainly are), do the work diligently, follow the process, and let the score take care of itself.

Sunday, June 12, 2022

I am surprised that the Shortest Vector Problem is not known to be NP-hard, but perhaps I am wrong


A lattice L in R^n is a discrete subgroup of R^n. 

Let p IN [1,infinity).

The p-norm of a vector x=(x_1,...,x_n) IN R^n is

                                          ||x||_p=(|x_1|^p + ... + |x_n|^p)^{1/p}.

Note that p=2 yields the standard Euclidean distance.

If p=infinity  then ||x||_p=max_{1 LE  i LE n} |x_i|.

Let p IN [1,infinity]

The Shortest Vector Problem in norm p (SVP_p) is as follows:

INPUT: A lattice L specified by a basis.

OUTPUT: A shortest nonzero vector of L in the p-norm.
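
As a toy illustration of the definition only (a minimal Python sketch of mine, not a real lattice algorithm; the coefficient bound B is an arbitrary cutoff, so it only searches a finite box of integer combinations):

from itertools import product

# Toy brute force for SVP_p on a tiny lattice: try every integer
# combination of the basis vectors with coefficients in [-B, B] and keep
# the shortest nonzero one.  Real SVP algorithms (LLL, enumeration,
# sieving) are far more clever; this just demonstrates the definition.
def p_norm(x, p):
    if p == float("inf"):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def toy_shortest_vector(basis, p, B=5):
    dim = len(basis[0])
    best, best_len = None, float("inf")
    for coeffs in product(range(-B, B + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue                      # skip the zero vector
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
        length = p_norm(v, p)
        if length < best_len:
            best, best_len = v, length
    return best, best_len

# Example: the lattice generated by (1, 2) and (2, 3) in the 2-norm.
print(toy_shortest_vector([[1, 2], [2, 3]], 2))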

I was looking at lower bounds on approximating this problem and just assumed the original problem was NP-hard. Much to my surprise, either (a) it's not known, or (b) it is known and I missed it in my lit search. I am hoping that comments on this post will either verify (a) or tell me (b) with refs.

Here is what I found:

Peter van Emde Boas in 1979 showed that SVP_infinity is NP-hard. (See here for a page that has a link to the paper. I was unable to post the link directly; it's where it says I found the original paper.) He conjectured that for all p GE 1 the problem is NP-hard.


Miklos Ajtai in 1998 showed that SVP_2 is NP-hard under randomized reductions.  (See here)

There are other results by Subhash Khot in 2005 (see here) and Divesh Aggarwal et al. in 2021 (see here) about lower bounds on approximation using a variety of assumptions. (Also see the references in those two papers.) Aggarwal's paper is unusual in that it shows hardness results for all p except even p; however, this is likely a function of the proof techniques and not of reality. Likely these problems are hard for all p.

But even after all of those great papers it seems that the statement:

                For all p IN [1,infinity] SVP_p is NP-hard

is a conjecture, not a theorem. I wonder if van Emde Boas would be surprised. If he reads this blog, maybe I'll find out. If you know him then ask him to comment, or comment yourself. 

SO is that still a conjecture OR have I missed something?

(Oddly enough, my own blog post here (item 5) indicates SVP_p is NP-hard; however, I have not been able to track down the reference.)

Saturday, June 04, 2022

Does the Social Media Law in Texas affect theory bloggers?

A new law in Texas states that any social media site that has at least 50 million subscribers a month cannot ban anyone (it's more nuanced than that, but that's the drift).

(I wrote this before the Supreme Court blocked the law, which you can read about here. This is a temporary block so the issue is not settled.)

Here is an article about the law: here

My random thoughts

1) How can any internet law be local to Texas or to any state? I wonder the same thing about the EU's law about right-to-be-forgotten and other restrictions. 

2) Does the law apply to blogs? If Scott had over 50 million readers... Hold that thought. Imagine if that many people cared about quantum computing, complexity theory,  the Busy Beaver function,  and Scott's political and social views. That would be an awesome world! However, back to the point- if he did have that many readers would he not be allowed to ban anyone?

3) If Lance and I had over 50 million readers... Hold that thought. Imagine if that many people cared about Complexity Theory, Ramsey Theory, Betty White and Bill and Lance's political and social views. Would that be an awesome world? I leave that as an open question. However, back to the point- would they be able to block posts like: 

                      Great Post. Good point about SAT. Click here for a good deal on tuxedos. 

Either the poster thinks that Lance will win a Turing Award and wants him to look good for the ceremony, or it's a bot.

4) If Lipton and Regan's GLL blog had over 50 million readers.... Hold that thought. Imagine if that many people cared about Complexity theory, open-mindedness towards P=NP, catching people who cheat at chess, nice things about everyone they mention, and their political and social views. That would be a very polite world! However, back to the point- would they be able to block posts? Block people? 

5) arxiv recently rejected a paper by Doron Zeilberger. This rejection was idiotic, though Doron can argue the case better than I can, so see here for his version of events (one sign  that he can argue better than I can: he does not use any negative terms like idiot.)  Under the moronic Texas law, can arxiv ban Doron for life? (of course, the question arises, do they have at least 50 million subscribers?)

6) Given who is proposing the law, its intent is things like: you can't kick Donald Trump off Twitter. I wonder if Parler or 8chan or Truth Social, which claim to be free-speech sites but whose origins are on the right, would block liberals. Or block anyone? I DO NOT KNOW if they do, but I am curious. If anyone knows please post- no speculation or rumor, I only want solid information.

7) Republicans' default position is to not regulate industry. It is not necessarily a contradiction to support a regulation; however, they would need a strong argument why this particular case needs regulation when other issues do not. I have not seen such an argument; however, if you have one then leave a comment. (The argument that they are doing it to please their base is not what I mean- I want a fair, objective argument.)







Monday, May 30, 2022

Discussions I wish we were having


1) Democrats think the best way to avoid school shootings (and other problems with guns) is to have regulations on guns. They have proposed legislation. The Republicans think it's a mental health issue. They have proposed legislation for this. NO THEY HAVEN'T. I would respect the it's-a-mental-health-issue argument if the people saying this respected it. They do not. See here. Idea: Politico should leak a (false) memo by Gov Abbott where he says

We have a serious mental health crisis in Texas which caused the recent event. I am not just saying this to deflect from the gun issue. I have drawn up a bill to fund mental health care, providing more money for care and for studies. I call on Republicans and Democrats to pass it ASAP.

I wonder- if this false memo was leaked, would he deny it and say 

I didn't write that. I am using mental health only as a way to deflect from the gun issue. How dare they say that I am reasonable and am proposing actual solutions. 

Or would he be forced to follow through?

2) Democrats think Biden won the election. Some Republicans think Trump won the election. One issue was Arizona. So some Republicans organized a recount of Arizona. And when they found out that Biden really did win it, they said, as the good Popperian scientists they are, we had a falsifiable hypothesis and it was shown to be false, so now we acknowledge the original hypothesis was wrong. NO THEY DIDN'T. They seem to point to the Arizona audit as proof that they were right, even though it proves the opposite. (Same for all the court cases they lost.)

3) At one time I read some books that challenged evolution (Darwin on Trial by Phillip Johnson was one of them). Some of them DID raise some good points about how science is done (I am NOT being sarcastic). Some of them DID raise some questions like the gaps in the fossil record and Michael Behe's notion of irreducible complexity. (In hindsight these were window dressing and not what they cared about.) My thought at the time was it's good to have people view a branch of science with a different viewpoint. Perhaps the scientists at the Discovery Institute will find something interesting. (The Discovery Institute is a think tank and one of their interests is Intelligent Design.) Alas, the ID people seem to spend their time either challenging the teaching of evolution in school OR doing really bad science. Could intelligent people who think evolution is not correct look at it in a different way than scientists do, and do good science, or at least raise good questions, and come up with something interesting? I used to think so. Now I am not so sure.

4) I wish the debate were about what to do about global warming and not about whether global warming is happening. Conjecture: there will come a time when environmentalists finally come around to nuclear power being part of the answer. At that point, Republicans will be against nuclear power just because the Democrats are for it.

5) I sometimes get email discussions like the following (I will call the emailer Mel for no good reason.)

------------------------------------------------------

MEL: Dr. Gasarch, I have shown that R(5)=45.

BILL: Great! Can you email me your 2-coloring of K_{44} that has no mono K_5?

MEL: You are just being stubborn. Look at my proof!

------------------------------------------------------------

Clyde has asked me what if Mel had a nonconstructive proof?

FINE- then MEL can tell me that. But Mel doesn't know math well enough to make that kind of argument. Here is the discussion I wish we had:

-------------------------------------------

MEL: Dr. Gasarch, I have shown that R(5)=45.

BILL: Great! Can you email me your 2-coloring of K_{44} that has no mono K_5?

MEL: The proof is non-constructive.

BILL: Is it a probabilistic proof? If so then often the prob is not just nonzero but close to 1. Perhaps you could write a program that does the coin flipping and finds the coloring.

MEL: The proof uses the Lovász Local Lemma so the prob is not close to 1.

BILL: Even so, that can be coded up.

MEL: Yes but... (AND THIS IS AN INTELLIGENT CONVERSATION)

----------------------------------

Maybe Mel really did prove R(5)=45, or maybe not, but the above conversation would lead to enlightenment.
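
In the spirit of the coin-flipping suggestion above, here is a minimal sketch in Python (entirely mine, not from the post): randomly 2-color the edges of K_{44} and search for a monochromatic K_5. A uniformly random coloring is expected to contain thousands of mono K_5's, so an actual witness for R(5) > 44, if one exists, would need a much smarter search; the dialogue's mention of the Lovász Local Lemma and of coding up the coin flips is about exactly that gap.

import random
from itertools import combinations

# Randomly 2-color the edges of K_n and look for a monochromatic K_5.
def random_coloring(n=44):
    return {frozenset(e): random.randint(0, 1)
            for e in combinations(range(n), 2)}

def find_mono_k5(coloring, n=44):
    for S in combinations(range(n), 5):
        colors = {coloring[frozenset(e)] for e in combinations(S, 2)}
        if len(colors) == 1:
            return S      # found a monochromatic K_5
    return None           # none found: the coloring is a witness!

print(find_mono_k5(random_coloring()))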


Sunday, May 22, 2022

In the 1960's students protested the Vietnam war!/In 1830 students protested... Math?

I was at SUNY Stony Brook for college, 1976-1980.

I remember one student protest about a change to the calendar that (I think) would have us go home for winter break and then come back for finals.  I don't recall how that turned out. 

However I had heard about the protests in the 1960's over the Vietnam war. Recall that there was a draft back then so college students were directly affected. 

I was reading a book, Everything You Know Is Wrong, which noted that some people thought the first time there were student protests was in the 1960's, but this is not true. (Not quite as startling as finding out that a ship's captain cannot perform weddings.)

It pointed to The 1830 Conic Section Rebellion. I quote Wikipedia (full entry is here)

Prior to the introduction of blackboards, Yale students had been allowed to consult diagrams in their textbooks when solving geometry problems pertaining to conic sections – even on exams. When the students were no longer allowed to consult the text, but were instead required to draw their own diagrams on the blackboard, they refused to take the final exam. As a result forty-three of the ninety-six students – among them, Alfred Stille, and Andrew Calhoun, the son of John C. Calhoun (Vice President at the time) – were summarily expelled, and Yale authorities warned neighboring universities against admitting them.

Random Thoughts

1) From my 21st century perspective I am on the students' side. It seems analogous to allowing students to use calculators-- which I do allow.

2) From my 21st century perspective the punishment seems unfair.

3) The notion of a school telling other schools not to admit a student- I do not think this would happen now, and it might even be illegal (anti-trust).

4) I am assuming the students wanted to be able to consult their text out of some principle: we want to learn concepts, not do busy work. And again, from my perspective I agree that it was busy work.

5) Since all of my thoughts are from a 21st century perspective, they may be incorrect, or at least not as transferable to 1830 as I think. (Paradox: My ideas may not be as true as I think. But if I think that...)

6) I try to avoid giving busy work. When I teach Decidability and Undecidability I NEVER have the students actually write a Turing Machine that does something. In other cases I also try to make sure they never have to do drudge work. And I might not even be right in the 21st century- some of my colleagues tell me it's good for the students to get their hands dirty (perhaps just a little) with real TMs to get a feel for the fact that they can do anything.

7) The only student protests I hear about nowadays are on political issues. Do you know of any student protests on issues of how they are tested or what the course requirements are, or something of that ilk? I can imagine my discrete math students marching with signs that say: 

DOWN WITH INDUCTION!