Thursday, January 31, 2019

Phish Before Turkey

The Chronicle of Higher Education recently published the story "Phishing Scheme Targets Professors' Desire to Please Their Deans — All for $500 in Gift Cards." The same thing happened to me last fall.

Twas the day before Thanksgiving and an email went out to most of the faculty in my department.
From: Lance Fortnow <[email protected]>
Sent: Wednesday, November 21, 2018 1:45 PM
To: [name deleted]
Subject: 
Hello,are you available?
At the time I was in New Jersey visiting family. [email protected] is not my email address. I do own [email protected], but don't email me there; I rarely check it.

Some faculty checked with me to see if the message was real. One faculty member called me to ask what I wanted. Once I found out what was happening I sent a message to the entire faculty telling them to ignore those emails.

Some faculty did reply to see what I wanted. The response:
i need you to help me get an Amazon gifts card from the store,i will reimburse you back when i get to the office.
One of our security faculty decided to follow up and replied "Sure! Let me get them for you. Could you provide more more information? e.g., amount and #cards. I can bring them on Monday." The reply:
The amount i want is $100 each in two (2) piece so that will make it a total of $200 l'll be reimbursing back to you.i need physical cards which you are going to get from the store. When you get them,just scratch it and take a picture of them and attach it to the email then send it to me here ok
He went a few more rounds before the phisher just stopped responding.

A week later, a different faculty member came to my office and said I had wanted to see him but he had been out of town. I said it was nice to see him, but that I hadn't asked to talk to him, and we figured out that the confusion came from the phishing email.

Someone went to the trouble of creating a fake email address in my name, looking up the email addresses of the faculty in the department, and individually emailing each of them, without realizing that computer science professors won't fall for a gift card phishing attack. Or at least none of them admitted to falling for it.

Friday, January 25, 2019

The Paradigm Shift in FinTech Computation and the need for a Computational Toolkit (Guest Post by Evangelos Georgiadis)


We are experiencing a paradigm shift in finance as we enter the era of algorithmic FinTech computation. (And another is yet to come; see **Future** below.) This era is marked by a shift in the role played by the theoretical computer scientist. In the not so distant past, the (financial) economist had the ultimate stamp of approval on how to study financial models, pricing models, mechanism design, etc. The economist was the ultimate gatekeeper of ideas and models, whereas the main role of the computer scientist was to turn those ideas or models into working code; in a sense, an obedient beaver/engineer. (In finance, the theoretical computer scientist more often than not wears the hat of the quant.)

In today's era, the role of the theoretical computer scientist has been elevated from obedient engineer to creative architect, not only of models and mechanism designs but also of entire ecosystems; blockchain-based ecosystems are one example. In light of this promotion from obedient engineer to architect, we might need to revisit the notion of 'sharing blame' when things go wrong, as originally and elegantly voiced in On Quants by Professor Daniel W. Stroock.

The role change is also coupled with a shift in the emphasis of computation, which in turn necessitates a deeper understanding of (what this author would call) distributed yet pragmatic complexity-based crypto systems that attempt to redefine 'trust' in terms of distributed computation.

This change necessitates an ability to think in terms of approximation (and lower/upper bounds) or other good-enough solutions that work on all inputs, rather than merely the easy instances of problem types that usually lead to clean, exact formulas or solutions. Additionally, looking through the lens of approximation algorithms enables a different and often more insightful metric for dealing with intrinsically hard problems (for which exact or clean solutions often do not exist). Computer scientists are trained to think this way; financial economists are not. Might the economists actually get in the way?

Our tentative response: the economists are valuable, and the solution to the dilemma is to equip them with the right 'computational toolkit'. Ideally, such a toolkit comprises computational tools and algorithms that automate certain computational tasks which would otherwise require more granular understanding at the level of a theoretical computer scientist (or mathematician), or be too cumbersome to perform by hand even for the expert.

Essentially, a toolkit even for the theoretical computer scientist, one that frees her from clerical work and enables computation to scale from clean cases, such as n=1, to pathological (yet far more realistic) cases, such as n=100000, all the way to the advanced and rather important agnostic (or symbolic) case of n=k -- without much pain or agony.
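To make the n=1 versus n=k point concrete, here is a tiny sketch using Python's sympy as a stand-in for such a hybrid symbolic/numeric toolkit; the sum is an arbitrary illustration, not anything from the author's own system.

```python
# A toy illustration of "hybrid" capability: the same computation done
# numerically for concrete n and symbolically for the agnostic case n = k.
from sympy import symbols, summation, factor

i, k = symbols('i k', integer=True, positive=True)

closed_form = summation(i**2, (i, 1, k))   # symbolic case n = k
print(factor(closed_form))                 # k*(k + 1)*(2*k + 1)/6
print(closed_form.subs(k, 1))              # clean case n = 1 -> 1
print(closed_form.subs(k, 100000))         # "pathological" case, instantly
```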

The existence of such a toolkit would in turn do justice to the definition of FinTech Computation, which entails applying advanced computational techniques (not necessarily information technologies) to financial computation. In fact, this author is part of building such an infrastructure solution, which necessitates that the underlying programming language [R-E-CAS-T] have intrinsic hybrid capabilities -- symbolic as well as numeric.

One step towards this "automation" conquest is shown in A combinatorial-probabilistic analysis of bitcoin attacks, joint with Doron Zeilberger. The work illustrates an algorithmic risk analysis of the bitcoin protocol via symbolic computation, as opposed to the meticulous, yet more laborious, by-hand analysis shown by the European duo in Double spend races. Heavy usage of the Wilf-Zeilberger algorithmic proof theory, one of the cornerstones of applied symbolic computation, enabled automated recurrence discovery and algorithmic derivation of higher-order asymptotics. For example, in terms of asymptotics tools: the ability to internalize a very dense body of mathematics, such as the G.D. Birkhoff and W.J. Trjitzinsky method, symbolically, automates the process of computing asymptotics of solutions of recurrence equations; a swiss army knife for any user.
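For concreteness, here is a quick numeric sketch of the quantity being analyzed: the attacker's double-spend success probability from the Nakamoto white paper (the formula in its section 11), which the symbolic analyses above rederive and push to higher-order asymptotics.

```python
# Probability that an attacker with hash-power fraction q ever catches up
# from z blocks behind, per the Bitcoin white paper's analysis.
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0                     # majority attacker always catches up
    lam = z * q / p                    # expected attacker progress
    return 1.0 - sum(
        (lam**k * exp(-lam) / factorial(k)) * (1.0 - (q / p) ** (z - k))
        for k in range(z + 1))

for z in (0, 1, 2, 5, 10):
    print(z, attacker_success(0.1, z))  # matches the white paper's q=0.1 table
```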

**Future**

What does the future entail for FinTech Computation?

[My two satoshis on this]

Where are we headed in terms of the type of computation?

Blockchain-based systems, even though some of us (including this author) have noticed fundamental flaws, seem to still have momentum, at least judging from recent news articles about companies becoming blockchain-technology friendly. Beyond (of course) exchanges such as our friends at Binance and BitMEX, we have major smartphone makers such as Samsung, Huawei, and HTC. The favorable sentiment towards blockchain technology is shared even amongst top-tier U.S. banks.
(Can one deduce success or failure momentum from the citation count distribution of the paper that laid the groundwork for this technology? See Bitcoin: A Peer-to-Peer Electronic Cash System.)

If we look at crypto(currencies), one of many challenges for these blockchain-based systems is the high maintenance cost, certainly in terms of energy consumption when it comes to mining -- whether Proof-of-Work (PoW) is replaced by Proof-of-Stake (PoS) or some other more energy-efficient consensus variant. (This author is aware of various types of optimizations that have been used.)
A few questions that have bugged this author ever since:

a) Is there a natural way to formalize the notion of energy consumption for consensus mechanisms?

b) What about formalizing an energy-efficient mechanism design?

(The idea of the savings when PoW is replaced by PoS, as intended by our friends at the Ethereum Foundation, has been around for some time, but this author's point is that 0.99*X, where X is a supernatural number [a la Don E. Knuth], is still a big quantity; too big for an environmentalist?)

So, what comes next ?

[... the satoshis are still on the table.]

Daniel Kane has brought to my attention that quantum computation -- the seemingly next paradigm shift, in which again the role of TCS seems inextricably interwoven -- may lead to blockchain-based systems being replaced by less expensive (at least in terms of energy consumption) quantum-based systems. (Crypto might get replaced by Quantum (money). :-)) One such pioneering approach is masterfully articulated by Daniel Kane in "Quantum Money from Modular Forms."

Thursday, January 24, 2019

Machine Learning and Wind Turbines


My daughter Molly spent six weeks on an environmental program in China last summer. When she got back she had to do a report on machine learning and wind turbines used for clean energy generation. What does machine learning have to do with wind turbines? Plenty, it turns out, and it tells us a lot about the future of programming.

Sudden changes in wind can damage the blades of a turbine. Maintenance is very expensive, especially for turbines at sea, and a broken turbine generates no electricity. To catch these changes ahead of time you can mount a Lidar on top of the turbine.


The Lidar can detect wind gusts from about 100 meters ahead, giving about 10 seconds to react. In that time you can rotate the blades, or the whole turbine itself to minimize any damage. Here's a video describing the situation.


How do you do the computations to convert the Lidar data into accurate representations of wind gusts, and then how do you best adjust for them? You could imagine some complex fluid dynamics computation, which gets even more complex when you have several wind turbines in front of each other. Instead you can take the massive amount of data collected by the sensors on the turbine, together with the Lidar information, and train a neural network. Training takes a long time, but a trained network can quickly determine a good course of action. Neural networks can always make mistakes, but unlike self-driving cars, a mistake won't kill anyone; it will just possibly cause more damage. Since on average you save considerable maintenance costs, using ML here is a big win.
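To make the idea concrete, here is a heavily simplified sketch of the offline-training/fast-inference pattern. Everything here (feature count, synthetic data, target) is invented for illustration; a real controller would train on logged Lidar and sensor data.

```python
# Sketch: learn a mapping from Lidar/sensor features to a blade-pitch
# adjustment offline, then query it within the ~10-second reaction window.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))                 # 5 hypothetical gust features
y = 2.0 * X[:, 0] - 0.5 * X[:, 1]**2 + rng.normal(scale=0.1, size=10_000)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

gust = rng.normal(size=(1, 5))                   # a fresh Lidar reading
print(model.predict(gust))                       # fast enough to act in time
```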

I've obviously oversimplified the above, but I really like this example. This is not an ML solution to a standard AI question like image recognition or playing chess. Rather, we are using ML to make a difficult computation tractable by exploiting available data, and that changes how we think about programming complex tasks.

Sunday, January 20, 2019

ACM prize and some thoughts on the Godel Prize

(ADDED LATER: deadlines for some of these awards have passed, but here are the ones coming up soon:

Godel Prize: Feb 15

Knuth Prize: Feb 15

SIGACT News Dist Service: March 1

)

As Lance tweeted, and I will reiterate, nominations for the following prizes are due soon and you can nominate people here:

Godel Prize for outstanding paper in TCS. (Godel mentioned P vs NP in a letter to von Neumann. I've heard it said that it's too bad they didn't work on it -- either it would have been solved or we'd know it's hard. Frankly, I think enough smart people have worked on it that we already know it's hard.)

Knuth Prize for outstanding contributions to foundations of Computer Science. (It's a greater honor to have a prize named after you in your lifetime than to win a prize!)

Dijkstra Prize (I wonder if having `ijk' in his name inspired him to work in algorithms)

Kanellakis Theory and Practice Award.

Lawler Award for Humanitarian Contributions within CS and Informatics.

ACM Distinguished Service Award

Danny Lewin Best Student Paper Award (best student paper at STOC)

The Best Paper award (Best paper at STOC, Best paper at FOCS)

(The last two I doubt you can nominate someone for.)

A few thoughts on the Godel Prize:


1) You can win the Godel Prize twice, and some people have: Goldwasser, Hastad, Arora, Szegedy, Spielman, Teng. Spielman and Teng have won it as a team twice.

2) GLAD there is no limit to how many can win. If a paper has a lot of people on it (and this has happened) then FINE, they're all winners! According to The Big Bang Theory (the TV show, not the theory), at most 3 people can share a Nobel Prize in Physics for the same breakthrough in a given year. The show itself shows how stupid that policy is.

3) I either knew and forgot, or never knew, that DPDA equivalence is decidable! Glad to know it just in time for teaching Automata theory this spring.

4) Looking over the list reminded me that there are some papers in the intersection of those I want to read and those I am able to read! Though not many; most that I want to read seem hard.

5) The Kanellakis award is for theory that is PRACTICAL. Could someone win a Godel AND a Kanellakis award for the same paper (or set of papers)? I found one sort-of case ((a) below) and one definite case ((b) below).

a) Vardi and Wolper won the 2000 Godel Prize for work on Temporal Logic and Finite Automata (I should also read that before my class starts).

Holzmann, Kurshan, Vardi, and Wolper won the 2005 Kanellakis prize for Formal Verification Tools.

I assume the two works are related.

b) Freund and Schapire won the 2003 Godel Prize and the 2004 Kanellakis Award, both for their work on boosting in Machine Learning.

6) Why is it the Godel Prize and the Kanellakis Award? What is the difference between a prize and an award? A quick Google search says that an award is a token of effort and merit, while a prize is something you win in a competition. I doubt that applies here. I suspect they are called Prize and Award by historical accident. Does anyone know?

Thursday, January 17, 2019

The Cost of Privacy

Billboard at 2019 CES

Computer scientists tend to obsess about privacy and we've had a privacy/security debate for decades. But now machine learning has given us a whole new spin on what privacy protects and what it takes away.

I take an open approach and basically allow Google to know everything about my life. Google knows where I've been--sometimes my Pixel asks me which store in a shopping center I visited and I give up that info. Google knows who I communicate with, what websites I visit, what music and movies I listen to and watch, all my photos, what temperature makes me comfortable and so on.

What do I get? A Google ecosystem that sometimes knows me better than I know myself. Google works best when it learns and integrates. I get asked to download maps for trips Google knows I'm about to take. I have Google Assistant throughout my house, in my phone, and in my car, and it tailors answers, and sometimes even the questions that I need answers to. If anything I wish there were further integration; for example, Google Voice should ring my office phone only when I'm in the office.

Georgia Tech now forces us to use Microsoft Exchange for email. Outlook is not a bad email program, but its capabilities, especially search, do not work as well. And think of all that unused knowledge.

I trust Google to keep my information safe, with a random password and two-factor authentication, and even if someone did manage to break in they would find I'm a pretty boring person with an unhealthy obsession with opera (the musical form, not the browser).

This approach doesn't work for everyone, and companies should make it easy to keep your info secure. But I say: go use your machine learning on me and find ways to make my life easier and more fun, and sure, send me some targeted ads as payment. The Internets will find a way to discover you anyway; might as well take advantage.

Tuesday, January 15, 2019

Do we ever only care about the decision problem? I know of only one case of that

(I had been thinking of this for a post then Lance's post on search versus decision inspired me to write up these thoughts.)

When teaching NP-completeness we often say

The problem we really care about is, for example: given a weighted graph and two vertices s and t, find the optimal way to go from s to t while hitting every node. But it's cleaner mathematically to look at the decision problem:

{ (G,s,t,C) : there is a Hamiltonian path from s to t that costs ≤ C }

The search and decision versions are polynomial-time equivalent, so it's fine to just look at the decision problem. Indeed, if our interest is in lower bounds, then clearly if decision is hard then search is hard.

But here are some questions about search vs decision in general, not just with regard to P vs NP.

1) Is there ever a case where the real world actually cares about the decision version? I can think of just one: given a number, is it PRIME? This is used in crypto. The real world does not need the witness that it's prime (or similar); they just want a prime. Any other cases?
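To see why the decision answer alone suffices, note how primes are generated in practice: sample candidates and keep the first one the primality test accepts. A minimal sketch, using sympy's isprime as the yes/no decision procedure:

```python
import random
from sympy import isprime  # a decision procedure: yes/no, no witness

def random_prime(bits: int) -> int:
    while True:
        # force the top bit (full length) and the bottom bit (odd)
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if isprime(candidate):
            return candidate

print(random_prime(512))
```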

2) How far apart can search and decision be? NP-decision and NP-search are poly equivalent. In other domains can they be very far apart? For example, does FINDING a k-clique or k-independent set in a graph on 2^{2k} vertices require roughly n^k steps (go through all k-sets), or can we do much better? I suspect this is unknown but would be delighted if a commenter tells me otherwise.


Wednesday, January 09, 2019

Search versus Decision

Shockingly I've never done a post on search versus decision, one of the more interesting dualities in complexity. In short: Decision: Is there a needle in the haystack? Search: Find the needle.

In Satisfiability, or any other NP-complete problem, the two problems are essentially equivalent. If you can decide SAT you can find a solution (good homework problem) or even the best solution. Often people mix up the two, saying for instance that finding the shortest Traveling Salesman tour is NP-complete, usually without getting into too much trouble.
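Here is a sketch of that homework problem, the standard self-reduction: fix the variables one at a time, consulting the decision oracle. A brute-force routine stands in for the oracle, so this only runs on tiny formulas.

```python
from itertools import product

def sat_oracle(clauses, n):
    # stand-in decision oracle; literal +i means x_i, -i means NOT x_i
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for bits in product([False, True], repeat=n))

def substitute(clauses, var, val):
    # plug in x_var = val: drop satisfied clauses, shrink the rest
    out = []
    for c in clauses:
        if (var in c and val) or (-var in c and not val):
            continue
        out.append([l for l in c if abs(l) != var])
    return out

def find_assignment(clauses, n):
    if not sat_oracle(clauses, n):
        return None
    assignment = []
    for var in range(1, n + 1):
        trial = substitute(clauses, var, True)
        if [] not in trial and sat_oracle(trial, n):
            assignment.append(True)
            clauses = trial
        else:  # satisfiability is preserved, so x_var = False must work
            assignment.append(False)
            clauses = substitute(clauses, var, False)
    return assignment

# (x1 or x2) and (not x1 or x3) and (not x2)
print(find_assignment([[1, 2], [-1, 3], [-2]], 3))  # [True, False, True]
```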

Decision is always at least as easy as search: If you have a solution you know there is one. What about the other direction? We can't actually prove search is hard without separating P and NP, but we have our conjectures.

Sometimes both are easy. We can easily find the maximum weighted matching.

Sometimes decision is easy and search is supposedly hard: Composite Numbers. The search version is factoring.

Sometimes decision is trivial (i.e., solutions always exist) and search is still hard: Nash equilibria, Ramsey graphs.

Often we ask whether search reduces to decision: if you have an oracle (magic black box) that answers decision questions, can you solve the search problem efficiently? SAT has this property, as does Matching (for trivial reasons). Nash Equilibrium and Composite Numbers likely don't.

Graph Isomorphism does, i.e., given an oracle for graph isomorphism you can find the isomorphism (another good homework problem).
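A sketch of one standard approach to that homework problem: individualize vertices with gadgets too large to be confused with anything in the original graphs, so the oracle is forced to respect the pairs fixed so far. Here nx.is_isomorphic stands in for the hypothetical decision oracle, and for simplicity the graphs are assumed to have no isolated vertices.

```python
import networkx as nx

def gi_oracle(G, H):
    return nx.is_isomorphic(G, H)  # stand-in for the GI decision oracle

def attach_star(Gr, v, leaves, tag):
    # pendant star: a new center adjacent to v plus 'leaves' new leaf nodes
    center = (tag, 'c')
    Gr.add_edge(v, center)
    for j in range(leaves):
        Gr.add_edge(center, (tag, j))

def find_isomorphism(G, H):
    if not gi_oracle(G, H):
        return None
    G, H = G.copy(), H.copy()
    n = G.number_of_nodes()
    mapping, unused = {}, list(H.nodes)
    for i, u in enumerate(list(G.nodes)):
        # center degrees n+i+3 are distinct and exceed every original degree,
        # so any isomorphism must match the two new stars, forcing u -> v
        attach_star(G, u, n + i + 2, ('g', i))
        for v in unused:
            Htry = H.copy()
            attach_star(Htry, v, n + i + 2, ('h', i))
            if gi_oracle(G, Htry):
                mapping[u], H = v, Htry
                unused.remove(v)
                break
    return mapping

G = nx.cycle_graph(4)
H = nx.relabel_nodes(G, {0: 'a', 1: 'b', 2: 'c', 3: 'd'})
print(find_isomorphism(G, H))  # {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
```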

There's also an interesting non-adaptive version: given a SAT formula, can you find an assignment with questions to a SAT oracle that all have to be asked at the same time?

Here the answer is probably yes. If the formula has exactly one solution you can find it by asking for each bit of the solution. Using randomness you can reduce SAT to several formulas, one of which likely has a single assignment that is also an assignment of the original formula. With standard hardness assumptions you can eliminate the randomness.
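A sketch of the unique-solution step (the randomized reduction to the unique case is not shown): ask, in parallel, "is there a satisfying assignment with x_i = True?" for each i; the n yes/no answers spell out the unique assignment. Again a brute-force routine stands in for the oracle.

```python
from itertools import product

def sat_oracle(clauses, n, forced=None):
    # decision oracle by brute force; 'forced' pins one variable's value
    for bits in product([False, True], repeat=n):
        if forced is not None and bits[forced[0]] != forced[1]:
            continue
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def recover_unique_assignment(clauses, n):
    # n non-adaptive queries, one per bit of the promised unique solution
    return [sat_oracle(clauses, n, forced=(i, True)) for i in range(n)]

# (x1 or x2) and (not x1): unique solution x1=False, x2=True
print(recover_unique_assignment([[1, 2], [-1]], 2))  # [False, True]
```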

Is the same true for graph isomorphism? I think that's still open.

Sunday, January 06, 2019

When is a kilogram not a kilogram?

A long long time ago the standards for meters, kilograms, etc. were actual physical objects.

Those days are long gone, of course. For example, the meter is defined as the length of the path traveled by light in 1/299,792,458th of a second. Why such an odd number (can fractions be odd?)? Because they retrofitted it to what the meter already was. Rather than go to France and compare my stick to the one under a glass case, I can just measure the speed of light. Oh. That sounds hard!

It matters a bit, since the mass of what was the standard kilogram did drift over time, though of course not by much. When did measurements STOP being based on physical objects and become based entirely on constants of the universe?

The answer surprised me:

On Nov 16, 2018 (yes, you read that right) they decided that by May 20, 2019, the kilogram will be defined in terms of Planck's constant. I had not been able to find out how they will use Planck's constant, and thought maybe they didn't know yet (they do and it's known -- see the first comment). With that, there are no more standards based on physical objects. Read about it here.
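For the curious, the mechanics go roughly like this: the redefinition fixes Planck's constant at exactly h = 6.62607015 × 10^{-34} kg·m²/s. The meter and the second are already defined via the speed of light and the cesium frequency, so fixing the numerical value of h pins down the kilogram. In practice a Kibble balance realizes the definition, balancing a mass against an electromagnetic force measured in units derived from h.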

Why did it take so long? I honestly don't know and I am tossing that question out to my readers. You can leave serious or funny answers, and best if I can't tell which is which!




Wednesday, January 02, 2019

Today is Thirdsday! Enjoy it while you can!

Fellow Blogger James Propp has come up with a new Math holiday:

Thirdsday!

The day is Jan 3 (1-3 in America, though 3-1 in ... Everywhere else?) but only when Jan 3 is a Thursday.

It is a day where we celebrate the magic of the number 1/3.

0) For other math days to celebrate see here

1/3) James Propp blogged about Thirdsday on Monday, Dec 31. Really??? here

2/3) Evelyn Lamb blogged about Thirdsday on Tuesday, Jan 1. Really??? here

3/3) Ben Orlin blogged about Thirdsday on Wednesday, Jan 2. Really??? here

(Added ON Thirdsday: Matt Foreman has a video about Thirdsday here and a blog post here.)

How come I'm the only one who blogged about Thirdsday on Thursday, Jan 3??? (Added later: not quite true anymore; Matt Foreman also waited until Thirdsday to post on Thirdsday.)
I asked Jim Propp about this. He said that he wanted to help prepare teachers and other educators for the excitement of Thirdsday! If they already know the wonders of 1/3 they can prepare and lecture on it! Kudos to him! I assume that Evelyn and Ben are similar! Kudos to them! And Matt Foreman posted ON Thirdsday, so kudos to him too!

2) Darling asked me, `Is it a real day like Pi-Day?' Is Pi-Day real? Is any holiday real? All holidays are made up until they are accepted and become real. The distinction between real holidays and made-up holidays is... nonexistent. One can only talk of accepted and not-accepted holidays. How long did Pi-Day take to be accepted? That is probably not a well-defined question.

3) James Propp's and Evelyn Lamb's blogs have many math properties of 1/3. One educational property: I think it is the first number that students see that is an infinite decimal. My favorite unbiased use of 1/3: the Cantor set, an uncountable subset of [0,1] that has measure 0. Really!!! My favorite biased use: it's important in Muffin Math. If m>s and you want to divide and distribute m muffins to s students, there is always a way to do this with smallest piece at least 1/3. (Usually you can do better, but sometimes this is the best you can do.)

4) When will the next Thirdsday come?

2019: Jan 3 is a Thursday, so YES

2020: Jan 3 is a Friday, so NO

2021: Jan 3 is a Sunday (why not Saturday? Leap year. Great -- it will come sooner!) so NO

2022: Jan 3 is a Monday, so NO

2023: Jan 3 is a Tuesday  so NO

2024: Jan 3 is a Wednesday  so NO

2025: Jan 3 is a Friday. WHAT! Why no Thirdsday?  Darn leap year! So NO.

2026: Jan 3 is a Saturday, so NO

2027: Jan 3 is a Sunday so NO

2028: Jan 3 is a Monday so NO

2029: Jan 3 is a Wednesday (why not Tuesday? Leap year), so NO

2030: Jan 3 is a Thursday (Leap Year helped!), so YES FINALLY!

(Exercise: find a formula. If 2019 was the first Thirdsday, find the year of TD(i), the ith Thirdsday.)
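A quick empirical check (Python's datetime, where Monday is 0 and Thursday is 3):

```python
from datetime import date

thirdsdays = [y for y in range(2019, 2100) if date(y, 1, 3).weekday() == 3]
print(thirdsdays)  # [2019, 2030, 2036, 2041, 2047, ...], repeating with period 28
```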

So enjoy Thirdsday in 2019, while spellcheck still flags it.

In 2030 it will be an accepted holiday and spellcheck will think it's fine.










Monday, December 31, 2018

Complexity Year in Review 2018

Result of the year goes to
Oracle Separation of BQP and PH by Ran Raz and Avishay Tal
which we wrote about in June. This work solves one of the original open questions in quantum complexity using tools from both quantum and classical circuit complexity. How often do we see oracle results with popular articles in Quanta (ignore the hyperbolic title), The Hindu and CACM?

Runner up goes to the solution of the 2-to-2 Games Conjecture by Subhash Khot, Dor Minzer and Muli Safra early in 2018. Boaz Barak gave a nice two post overview.

In last year's review we talked about the magical breakthroughs of machine learning. This year we seem to have moved beyond the magic to where machine learning has become a thing. We see the immense value of data and continue to struggle with the ethical challenges of collecting and acting on data, the dominance of the big tech companies, training all these students who want to gain expertise in the area, and trying to understand why ML works as well as it does.

The big X-factor is China. Will competition with China spur science literacy and funding in the US like the cold war with the Soviets did? Or will isolation from China limit scientific collaboration like the cold war with the Soviets did?

The big tech surprise was the rise of electric scooters. Georgia Tech has embraced them, and they are a quick way to get around campus.

Some of the other questions I asked last year didn't have interesting answers: What will the Internet look like post-net neutrality? (too early to tell) How will the new tax code play out? (too early to tell) Where will Amazon put HQ2? (New York and DC--the only surprise was picking two cities) What can quantum computers with 50 qubits accomplish? (still a good question) Will bitcoin move to $300K or 30 cents? (it dropped but still has real value)

Thanks to our guest posters Vijay Vazirani, Samir Khuller, Robert Kleinberg, and anonymous.


We end the year with craziness: the stock market is going through wild gyrations, we have a partial government shutdown including all of NSF, and an uncertain political landscape with different parties leading the two houses of Congress. We're still in the midst of a technological revolution as governments around the world try to figure out how to regulate it. I find it hard to predict 2019, but it will not be quiet.

Wednesday, December 26, 2018

Ker-I Ko (1950-2018)

A few weeks ago as Bill and I prepared for our upcoming year in review post, we noted that we hadn't lost any theoretical computer scientists this year, at least none that we were aware of. Unfortunately we didn't make it through all of 2018 unscathed.

Computational complexity theorist Ker-I Ko passed away peacefully from lung failure on December 13. Ker-I Ko spent most of his career at Stony Brook, from which he had recently retired to take on a professorship at National Chiao Tung University in Taiwan.

I had only a few brief meetings with Ko, but I knew his work quite well. In his best-known work, Ko, in a solo paper, created an infinite series of oracles A_1, A_2, … such that relative to A_k, the polynomial-time hierarchy collapses to exactly the kth level, that is, Σ_{k-1} ≠ Σ_k = Σ_{k+1} = PH. Ko wielded the switching lemma like a scalpel, pulling apart the (k-1)st and kth levels while leaving enough room to encode the (k+1)st level. He actually gives two sets of oracles, one which collapses PH to PSPACE while collapsing the hierarchy to the kth level, and one that separates PH from PSPACE. Even his oracle showing P=NP≠PSPACE wasn't trivial, and I used it as an example of a hard-to-settle complexity question.

Ko, with Tim Long and Ding-Zhu Du, showed that P ≠ UP if and only if there exist two sets that are one-to-one length-increasing polynomial-time reducible to each other but not polynomial-time isomorphic. This paper played a large role in helping us understand the isomorphism conjecture.

Ko, with Pekka Orponen, Uwe Schöning and Osamu Watanabe, used Kolmogorov complexity to measure the complexity of an instance of a problem. The instance complexity of x with respect to a set A is the size of the smallest program that correctly answers whether x is in A, and does not give an incorrect answer for any other y, though it can answer "I don't know" for y ≠ x.

Ko also had several papers on complexity of real-valued functions and wrote several textbooks and manuscripts. A big loss for all of us in the complexity world.

Sunday, December 16, 2018

Guest post: Join SIGACT!

This is a guest post by Samir Khuller and Robert Kleinberg.

Dear friends,

As our research community continues to grow and thrive, SIGACT membership has not grown apace. We respectfully urge you to join SIGACT! Membership is very cheap (and does not require ACM membership) – only $15 a year – and by joining you will be lending your support to the many activities that SIGACT undertakes on behalf of the theoretical computer science research community. These include:

  • sponsoring STOC and other theory conferences such as SPAA and PODC, as well as co-sponsoring SODA;
  • awards such as the Knuth, Gödel, and Kanellakis Prizes, the SIGACT Distinguished Service Award, and the best student paper awards at STOC and SODA;
  • supporting the Women in Theory workshop;
  • representing the theoretical computer science community to the ACM and beyond.

In addition to these community benefits, membership comes with individual benefits, including voting rights in SIGACT elections, a reduced rate for membership in EATCS, reduced registration rates at SIGACT-sponsored conferences, access to SIGACT News, and announcements sent on the SIGACT email list.

SIG membership does not automatically renew when you renew your ACM membership, and we suspect this may be one reason for the decline in SIGACT membership. So the next time you renew your ACM membership, remember to also join SIGACT or renew your SIG membership! Better yet, why wait? If you're not a SIGACT member, join right now -- you can use this link: here

Please do your part to nurture this important resource for our community.

Respectfully,

The SIGACT Executive Committee

Thursday, December 13, 2018

Inverting Onto Functions

Here's an open question that goes back to a 2003 paper that I wrote with Steve Fenner, John Rogers and Ashish Naik. The conference version goes back to 1996.

In that paper we discuss two hypotheses we badly named Q and Q', and it still remains open whether the two hypotheses are equivalent.

Q has a number of equivalent definitions, including

  • For all NP machines M that accept all strings, there is a polynomial-time computable function f such that f(x) is an accepting path of M(x) for all x.
  • For every onto, honest, polynomial-time computable function g there is a polynomial-time computable function f that finds an inverse of g; more precisely, g(f(g(x))) = g(x) for all x.
  • TFNP is computable in FP.
For lots more equivalences, see the paper.

Q' is the bit version of Q. For example:

  • For all NP machines M that accept all strings, there is a polynomial-time computable function f such that f(x) outputs the first bit of an accepting path of M(x) for all x.
  • For every onto, honest, polynomial-time computable function g there is a polynomial-time computable function f that finds the first bit of an inverse of g; more precisely, for all x there is a y such that g(y) = x and f(x) is the first bit of y.
Now Q implies Q': if you can find an accepting path of M(x), you can just read off the first bit. Does Q' imply Q?

If P = NP you can find solutions using self-reductions. For Q', self-reduction gets stuck because as you start filling in bits you may lose the "onto" promise.

On the other hand we don't even know any relativized worlds where Q' is true and Q is false. So either prove that Q' implies Q or show a relativized world where Q' is true and Q is false.

How often can I dole out 22-year-old open problems that don't require deep complexity to understand? I can't promise what techniques you'll need to solve them.

Sunday, December 09, 2018

Super Asymmetry on The Big Bang Theory: How Realistic?

The TV show The Big Bang Theory portrays academia, so I am naturally curious how realistic it is. I have posted about this before (see here) in the context of whether the things they say about physics are true. Today I post about a recent arc where Amy and Sheldon are working on super asymmetry.


SPOILER ALERT

1) The name: Super Asymmetry. It's not a field, but it could be. I assume it's about particle physics, but I'm not sure they ever say this. A fine name!

2) Amy is a neurobiologist (this was flagged as not being a word, but I think it is) working with Sheldon on a physical theory that I would assume requires hard math. Physics is hard! So I wonder how realistic this is. Actually, more important than being hard is that you need a lot of background knowledge. So the question of interest is: can an amateur still help in the discovery of a new physical theory? This may depend on the definitions of amateur, discovery, new, and physical. Alone, I would doubt it. But with help from Sheldon, I can believe it. Still, making new discoveries in an old field is hard.

3) Amy and Sheldon first had the idea for super asymmetry on their honeymoon. Most married couples have other things to do on their honeymoon. (I did ask my darling to prove the primes were infinite on our wedding day, before I married her. She was nervous so she couldn't do it, but normally she could. I know a mathematician who made her spouse memorize the definition of a Banach space and recite it to her on their wedding day, before they got married.)

4) After they do most of the work, THEN they go track down references. This seems stupid but not unrealistic. You can get excited about a theory and work on it at breakneck speed, not wanting to slow down to check references. But see the next point.

5) Sheldon was counting on this for a Nobel Prize. I would think you would check refs before even thinking in those terms.

6) An article in Russian was found that proved the theory could not work. There are a few things wrong with this:

a) The article used the exact same phrase ``Super Asymmetry'' - that seems unlikely.

b) They seemed to not READ the article, just the first page, and then say DARN, all that work down the tubes.

c) They seemed to not even try to say, `OKAY, they did BLAH, we did BLAH BLAH, how do they compare and contrast?' (ADDED LATER: I just saw the episode that follows. They probably DO have something after all. They should have listened to my advice before going into a funk.)

d) If they did all of that work I am sure SOMETHING can be recovered from it.

7) This is not really a post about The Big Bang Theory. I want to know more about your experiences with research: have you worked on a problem and found out it didn't work or was already done, or something like that? And what happened?


Wednesday, December 05, 2018

Remembering George H. W. Bush

Today is the national day of mourning for George Herbert Walker Bush, one of the best presidents for science and computing. He created PCAST, the President's Council of Advisors on Science and Technology. Bush signed the High Performance Computing Act (introduced by Al Gore), that powered computing research and the Internet through the massive growth of the 90's. His administration started the Human Genome Project and the US Global Change Research Program. He appointed the first and so far only African-American NSF Director.

Bush also started the short-lived Presidential Faculty Fellows program. As a member of the first class of fellows, I got invited to a ceremony in the Rose Garden in June of 1992. I didn't actually get to shake hands with President Bush; in that busy election year we had a joint ceremony with some high school award winners and the National Medal of Technology recipients, who included Bill Gates and Joseph Woodland, the inventor of the bar code scanner used at supermarkets. George Bush famously may or may not have been amazed by this technology a few months earlier at a grocers convention, and had no issues joking about it when introducing Woodland.

Sipping lemonade on the White House lawn is not an experience one soon forgets. And I guess I haven't, twenty-six years later. Thanks, President Bush, and Godspeed.


Sunday, December 02, 2018

George HW Bush passed away -- some non-partisan math comments

George HW Bush passed away recently. When he was alive there were 5 living ex-presidents; now there are 4. What is the max and what is the min number of living ex-presidents? This we will answer. What is the probability of having many living ex-presidents?

What is the max number of ex-presidents alive at the same time? List the times this has happened. Your answer should be a list of statements of the following form:

 Shortly after X took office there were Y ex-presidents: Z(1), Z(2), ... , Z(Y).

I leave a little white space in case you want to try to figure it out, though the point of this post is not to quiz you.
















ANSWER: The max number of ex-presidents alive at the
same time is five. This has happened four times.

--------------------------------------------
ONE: In 1861 just after Lincoln took office there were five living ex-presidents:
Martin van Buren (died in 1862), John Tyler (died in 1862), Millard Fillmore (died in 1874), Franklin Pierce (died in 1869), James Buchanan (died in 1868).

Key factors: (1) Between 1836 and 1860 there were no 2-term presidents, (2) Martin van Buren lived a long time after being president.

TWO: In 1993 just after Clinton took office there were five living ex-presidents:
Richard M. Nixon (died in 1994), Gerald Ford (died in 2006), Jimmy Carter (still alive as of Dec 2018), Ronald Reagan (died in 2004), George HW Bush (died in 2018).

Key factors: (1) Nixon, Ford, Carter, Bush Sr were the equivalent of 4 one-terms, and (2) Reagan lived a long time after being president.

THREE: In 2001 just after George W. Bush took office there were five living ex-presidents:
Gerald Ford (died in 2006), Jimmy Carter (still alive as of Dec 2018), Ronald Reagan (died in 2004), George HW Bush (died in 2018), Bill Clinton (still alive as of Dec 2018).

Key factors: (1) Ford, Carter, Bush Sr. were effectively 3 one-terms, and (2) Reagan lived a long time after being president.

FOUR: In 2017 just after Donald Trump took office there were five living ex-presidents:
Jimmy Carter (still alive as of Dec 2018), George HW Bush (died in 2018), Bill Clinton (still alive as of Dec 2018), George W Bush (still alive as of Dec 2018), Barack Obama (still alive as of Dec 2018).

Key factors: (1) Carter and Bush Sr were both one-termers, (2) Clinton and W are relatively young for presidents and in good health, and (3) Carter and Bush Sr lived a long time (Carter is still living!).

------------------------------------

I want to see this record broken! I want to see 6 living ex-presidents! (Darling asks why I want to see that. It's a good question, which I will partially address later.) Hence I want to see Donald Trump impeached or resign or leave office! I was hoping it would happen before one of Carter, Bush Sr, Clinton, W, Obama died. Oh well.

So now what? Is it possible that we will see 6 living ex-presidents in our lifetime? Factors: presidential longevity, presidential age, one-term vs two-term, and, since I am asking about OUR lifetime, our longevity.

Let's assume that neither The Donald nor any other president resigns or gets impeached or leaves office before their term is up. We assume that the presidents after Trump are Alice, Bob, and Carol.

Scenarios:

ONE: Donald Trump loses to Alice in 2020, and Alice loses to Bob in 2024. None of the ex-presidents dies before 2025. Then we would have, on the first day of the Bob presidency in 2025, 6 living ex-presidents: Carter, Clinton, W, Obama, Trump, Alice.

This needs Carter to live to be about 100 (the others are much younger). Possible!

TWO: Donald Trump loses to Alice in 2020, Alice loses to Bob in 2024, and Bob loses to Carol in 2028. Carter passes away before 2025, but the other ex-presidents are alive in 2029. Then we would have, on the first day of the Carol presidency in 2029, 6 living ex-presidents: Clinton, W, Obama, Trump, Alice, Bob.


This needs W, Clinton, and Trump to live to be about 83 and Obama to live to be about 68. Possible!

I'll stop here, but you can make up your own SCENARIO THREE which requires some people to live to 87.

Scenario ONE seems unlikely. TWO and THREE are plausible; however, there is another factor. I am assuming a long string of one-termers (that was flagged as not-a-word. Oh well.) Lately incumbency has been a big advantage: Clinton, W, and Obama were all two-termers. Incumbency is powerful for two reasons that reinforce each other:

The incumbent can DO things, can LOOK presidential.

Since the incumbent has these advantages people are scared to run against him or her.

--------------------------------------
Math problem: What is the probability that we will see 6 living ex-presidents by 2029? To solve this you would need to know:

Longevity statistics. But of what group? By age? By profession? Of ex-presidents? That seems too narrow for good statistics.

Incumbency statistics. How likely is it for a president to be re-elected? Again, too small a sample size. And Trump seems like an outlier. I suspect that if Jeb or Hillary were president they would get re-elected because of the incumbency advantage, but Trump is so unusual that it might not hold. One thing in his favor: it is unlikely there will be a challenge from his own party. One thing in his disfavor would be a third-party challenge. But ENOUGH. My point is that it would be hard to do good stats here.
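For what it's worth, here is the shape such a computation would take: a toy Monte Carlo for just the survival part of Scenario ONE, with invented survival rates standing in for the life tables we don't have.

```python
import random

# Ages at the start of 2019; the annual survival rates below are made-up
# placeholders, NOT actuarial data.
AGE_2019 = {'Carter': 94, 'Clinton': 72, 'W': 72, 'Obama': 57, 'Trump': 72}

def survival_rate(age):
    return 0.99 if age < 70 else 0.97 if age < 85 else 0.85  # hypothetical

def all_alive_in_2025():
    # Scenario ONE needs all five alive on the first day of 2025
    for age in AGE_2019.values():
        for year in range(6):  # 2019 through 2024
            if random.random() > survival_rate(age + year):
                return False
    return True

trials = 100_000
print(sum(all_alive_in_2025() for _ in range(trials)) / trials)
```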

-------------------------------------


So why do I care about seeing 6 living ex-presidents in my lifetime? I have a reason but it's not a good reason.

Early in the Nixon presidency LBJ died. I noticed that there were then ZERO living ex-presidents. I knew that LBJ was dead, and JFK was dead, and I suspected (correctly) that Eisenhower and Truman were dead, and I knew FDR was dead. Before that we have Hoover and others, who were of course dead. I was SO PROUD of myself for KNOWING this (to be fair, I was 12 years old). This sparked my interest in presidents and especially in the question of the most/least living ex-presidents.

Now for the obvious question on the other end of the spectrum:

What is the min number of ex-presidents alive at the same time? And when did it occur? (List all times.)

White space for those who want to try to figure it out or look it up.
























ANSWER: Zero. This happened six times.

ONE: When George Washington was president there obviously were zero living ex-presidents.

TWO: Shortly after John Adams became president, George Washington died. At that time there were zero living ex-presidents.

THREE: During Ulysses S Grant's term, Andrew Johnson, the prior president, died. Lincoln was dead by assassination and all prior presidents were dead of old age or similar (e.g., James Buchanan died at the age of 77; Franklin Pierce, an ancestor of Barbara Bush (nee Pierce), was 65 and died of cirrhosis of the liver, from alcoholism).

FOUR: During Theodore Roosevelt's term, Grover Cleveland died, and all other ex-presidents were already dead. Recall that the prior president, McKinley, had been assassinated.

FIVE: During Herbert Hoover's term, following Calvin Coolidge's death (Hoover's predecessor), there were no living ex-presidents. This partially explains why Coolidge didn't run: he had health problems. Note that Harding died in office.

SIX: During Nixon's term, in 1973, Lyndon Johnson died. At that time there were zero living ex-presidents. This was because Lyndon Johnson died young (64), Kennedy was assassinated, and Eisenhower was old while president.


NOTE: I would have thought that, since FDR served so long and died in office, either during FDR's term or Harry Truman's term there would have been a time with no living ex-presidents. But early in FDR's term there was one living ex-president: Herbert Hoover, and he didn't die until 1964. Hence he lived through the presidencies of FDR, Truman, Eisenhower, Kennedy, and part of Johnson's. This is NOT the most presidencies an ex-president has lived through after stepping down. That honor might go to Carter, who has lived through the presidencies of Reagan, Bush Sr, Clinton, W, Obama, and, as of this writing, a few years of Trump's. I have not checked if this is a record, but I will once Carter passes away.

NOTE: In most of the cases above a recent president had died prematurely (Grant: Lincoln; Roosevelt: McKinley; Hoover: Coolidge; Nixon: Johnson and Kennedy).

NOTE: Since Obama, W, and Clinton are all relatively young, and presidents dying in office is now very rare (the last was JFK in 1963), I doubt this will happen again. But politics and history can surprise you.








Wednesday, November 28, 2018

LOGCFL Venkat Style

H. Venkateswaran, a much-loved professor in the School of Computer Science at Georgia Tech and a fellow computational complexity theorist, is retiring at the end of December. In honor of Venkat I'd like to talk about my favorite paper of his, relating LOGCFL to semi-unbounded circuits.

Let's start with context-free languages. Even if you never took a theoretical computer science course, you probably saw them in elementary school.


A context-free grammar is a set of rules like S -> NP VP or N -> man; a language is context-free if some such grammar generates it. The context-free part comes from the fact that a noun phrase (NP) produces the same sentence fragments wherever it appears. CFLs have a rich theory--there have been whole textbooks devoted to the topic.
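As an aside, membership in a CFL is efficiently decidable; here is a minimal CYK membership test on a toy grammar (an invented example in Chomsky normal form), just to make the rule format concrete:

```python
# rules: nonterminal -> bodies; a body is one terminal or two nonterminals
RULES = {
    'S': [('A', 'B'), ('B', 'A')],
    'A': [('a',)],
    'B': [('b',)],
}

def cyk(w, start='S'):
    n = len(w)
    if n == 0:
        return False  # this toy grammar derives no empty string
    # table[i][j] = nonterminals deriving the substring w[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(w):
        for nt, bodies in RULES.items():
            if (ch,) in bodies:
                table[i][0].add(nt)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for nt, bodies in RULES.items():
                    for b in bodies:
                        if len(b) == 2 and b[0] in left and b[1] in right:
                            table[i][span - 1].add(nt)
    return start in table[0][n - 1]

print(cyk('ab'), cyk('ba'), cyk('aa'))  # True True False
```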

LOGCFL is the set of problems that are reducible to a context-free language via a small-space reduction. Formally, A is in LOGCFL if there is a CFL B and a log-space computable function f such that for all x, x is in A if and only if f(x) is in B.

Venkat showed that LOGCFL is equivalent to semi-unbounded circuits: log-depth circuits with unbounded OR gates but bounded AND gates, the class now called SAC1 (technically the equivalence holds for log-space uniform SAC1, but that's not important here). His proof goes through various models of alternating Turing machines and push-down automata.

Context-free languages are not closed under complement; for example, 0^n1^n0^n is not context-free but its complement is. Somewhat surprisingly, Borodin, Cook, Dymond, Ruzzo and Tompa showed that LOGCFL is closed under complement, combining the Immerman-Szelepcsényi inductive counting technique with Venkat's circuit characterization of LOGCFL.

The Borodin et al. result implies that whether you have bounded ORs and unbounded ANDs, or bounded ANDs and unbounded ORs, you compute the same class.

Enjoy your retirement Venkat. We'll miss you!

Sunday, November 25, 2018

If you think a theorem is true then spend half your time trying to prove it's true, and half trying to prove it's false.

There is a quote I recall but not who said it.  I have not been able to find it on the web.


If you think a theorem is true then spend half of your time trying to prove that it's true, and half trying to prove that it's false.

I thought it was Erdos, but I could not find any connection between him and the saying. I did find something that suggests he did not say it:

An Erdos problem that pays different amounts of money depending on whether the conjecture is true or false:

--------------------------------------------------------------------------------
For a finite family F of graphs, let t(n,F) denote the smallest integer m such that every graph on n vertices and m edges must contain a member of F as a subgraph.

A problem on Turán numbers for graphs with degree constraints (proposed by Erdös and Simonovits [1]; $250 for a proof and $100 for a counterexample):

Prove or disprove that t(n,H) = O(n^{3/2}) if and only if H does not contain a subgraph each vertex of which has degree > 2.

Bibliography
[1] P. Erdös, Some of my old and new combinatorial problems, Paths, Flows, and VLSI-Layout (Bonn, 1988), Algorithms Combin. 9, Springer, Berlin, 1990, pp. 35-45.
----------------------------------------------------------------------------------------

A while back there was a $1,000,000 prize for PROVING Goldbach's conjecture (the prize had a deadline, which is now past). See here. However, the article does not say what you win for a counterexample. I suspect nothing. Is this fair? (1) YES: proving Goldbach would be hard, whereas if it's false, just finding the counterexample likely won't require hard math. (2) NO: HEY, they resolved it, so there. (3) Have a panel look at the solution and decide if it has merit.

ANYWAY- if someone knows the source of the quote, please let me know.

Monday, November 19, 2018

Is Secret sharing REALLY REALLY REALLY used?

Since I am teaching cryptography this semester I am teaching things people REALLY REALLY REALLY (RRR) use. For some topics this is RRR true, like RSA (the fact that it is mostly used to transmit a private key, which is then used for the actual encryption, is FINE).

I was wondering if information-theoretic secret sharing is RRR used. I am asking non-cynically and non-rhetorically, so I want to be taken seriously AND literally.

  I Googled and got some answers but could not verify them.

1) At this site: here we can read

Every modern HSM (hardware secure module, for crypto applications) uses Shamir's secret sharing.

I tried to verify this but was unable to.

I also read

The DNSSEC root key is 5-out-of-7 shared; see, e.g., here or here.

The link leads to a story about how there are 7 people such that if any 5 get together they can shut down the internet. The story does not say they use secret sharing.

2) Threshold cryptography (see here) would seem to use it, but is threshold crypto used? There is a startup that is trying to use it: see here. I don't count that as RRR since they don't have customers yet. Not sure what the threshold is for whether I count it.

3) Some papers mention that secret sharing COULD be used if you want to make sure nuclear missiles are not launched unless x out of y people say so. But I doubt it really is used, and if so this might be very hard to verify.

4) Shamir's secret sharing scheme is pretty simple and does not have big constants, so if there were a need it really could be used. But is there a need?
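To back up the "pretty simple" claim, here is a minimal sketch of Shamir's k-out-of-n scheme over a prime field. The prime and the parameters are toy choices for illustration, not production ones.

```python
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than any toy secret

def share(secret, k, n):
    # random polynomial of degree k-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
```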

I am not quite sure what I would count as proof that someone RRR uses it. A random person at a random website saying that HSMs use it does not seem convincing. A Wikipedia article saying it's being used I would find convincing (should I? Until recently Wikipedia couldn't even get my year of birth straight; see here).

If some company's website said they used Shamir's secret sharing I would believe that -- but a company's website is not likely to say so, NOT for reasons of company secrets, but because it's not what most customers go to the website to find.

SO: if someone RRR knows that secret sharing is RRR being used, then please leave a comment.

Thursday, November 15, 2018

Simons and Amazon

I'm spending this week at the Simons Institute in smoky Berkeley, California. This fall Simons has a program on Lower Bounds in Computational Complexity. Scroll down that page and you'll see the rather strong collection of participants who are spending much of the fall at the Institute.

I purposely chose a week without a workshop--I'd rather talk to people than sit in talks. My former PhD student Rahul Santhanam is hosting me and we are having some fun discussions about minimum circuit-size problems, pseudorandom generators, and white-box vs. black-box algorithms. I've grown a little rusty in complexity during my years as department chair and have to work to keep up with Rahul. The student has become the master.

Even a quiet week at Simons is not that quiet. Every night seems to have a theme: trivia, board games, pub night, music night. I participated in a discussion with the "journalist in residence" on how to make lower bounds interesting to the general public. As part of a Turing awardee lecture series, Andy Yao gave a talk on Game Theory in Auction and Blockchain, which included some reminiscing about Yao's golden time in Berkeley back in the early 80's, when he helped lay the mathematical foundations of modern cryptography.

Simons started as a competition and, while I was on team Chicago, I have to admit Berkeley has done a wonderful job with the institute. We've just seen the results of another competition, with Amazon splitting its "second headquarters" between Northern Virginia and Queens, missing an awesome opportunity in Atlanta (not that I'm biased). I'm not surprised about the DC area, but pretty surprised about splitting the HQ into two locations, neither planned to reach the level of activity of HQ1 in Seattle. Amazon stated that they didn't think they could find the tech talent to fill 50,000 positions in a single city. So much for "build it and they will come".