Monday, July 29, 2013
For this post I will count only the operations PLUS, MINUS, and MULT. They may be done on rather large numbers.
Recall that from the work coming out of Hilbert's 10th problem we know the following: for every c.e. set A (c.e. sets used to be called r.e. sets, and some people still call them that) there is a polynomial f in 13 or fewer variables (we'll assume exactly 13) with integer coefficients such that
x ∈ A iff (∃ a1,...,a13)[f(x,a1,...,a13)=0]
In an article about Hilbert's 10th problem written in 1974 by Davis-Matiyasevich-Robinson, they note that by this result there is a FINITE number M such that, for ALL primes p, there is a certification that p is prime that uses at most M operations: given a prime p, let a1,...,a13 be such that f(p,a1,...,a13)=0. The certification that p is prime is just the evaluation of that polynomial and seeing that it is 0.
Is this still the only proof that one can certify primality in a CONSTANT number of operations?
Primes are irrelevant to all of this--- any c.e. set would work. (The result for c.e. sets may qualify as a theorem that is less interesting because it's more interesting.) But for primes I am wondering if there is another way to do this- perhaps using number theory, perhaps with a smaller value of M. For the explicit poly for primes, due to Jones, see here.
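To make the certification idea concrete, here is a minimal Python sketch. The actual polynomial for the primes is far too large to inline, so the example below uses a toy Diophantine set (the even numbers); the function f_even is my own hypothetical stand-in. The point is only that checking a certificate is a FIXED number of PLUS, MINUS, MULT operations: evaluate f and compare to 0.

```python
# Toy illustration of Diophantine certification (NOT the actual prime
# polynomial). For a Diophantine set {x : exists a1..ak with f(x,a1..ak)=0},
# a certificate for "x is in the set" is a witness tuple, and checking it
# is a constant number of +,-,* operations: evaluate f and compare to 0.

def certify(f, x, witness):
    """Verify membership by evaluating f -- constantly many operations."""
    return f(x, *witness) == 0

# Toy example: the even numbers are {x : exists a with x - 2a = 0}.
f_even = lambda x, a: x - 2 * a

print(certify(f_even, 10, (5,)))  # True: 10 is even, witnessed by a=5
print(certify(f_even, 7, (3,)))   # False: this witness does not work for 7
```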
Thursday, July 25, 2013
Ph.D. Attrition
Leonard Cassuto writes in the Chronicle an article Ph.D. Attrition: How Much Is Too Much? He presupposes the answer with the subtitle "A disturbing 50 percent of doctoral students leave graduate school without finishing".
The 50% is across all fields, but the numbers in computer science are somewhat in that range. Computer Science has different issues than the humanities, and theoretical CS has not quite the same issues as the rest of CS. Certainly we lose several students to start-ups and high-paying jobs. But what about the ones that just have trouble in grad school?
Cassuto writes
Perhaps they lack the temperament to work on their own (which undergraduate work does not test as severely as graduate school does), or perhaps they lack, say, the mathematical chops necessary to succeed at advanced physics. But there will be a number—and if admissions committees do a good job, it will be very small—who won't be able to finish because they're not up to the demands of the task.
Having read through many graduate applications over the years, there are very few, perhaps on average one or two a year, who will clearly succeed through graduate school. Almost without exception those students go to MIT or Berkeley.
For the rest of us, you have a choice. You can either take someone who will probably work their way to a Ph.D. but with uninspired research, or you can take a risk on a student who might have strong potential. Some of those students become great scientists; some of them flame out. You get a higher attrition rate by taking risks, but that's not a bad thing.
If you do take a risk in admissions you need to encourage students to "pursue other opportunities" once you realize they won't make it. That's a process that too many of us try to avoid, so we don't take those risks as much as we should.
Tuesday, July 23, 2013
I gave a poster session at Erdos 100- so how did it go?
In a prior post I suggested that STOC perhaps have people give posters instead of talks. While I doubt this will ever happen, I think it's worth thinking about, especially for future conferences that may be founded. I also noted that the NIPS conference does this.
But enough theory- at the Erdos 100th I GAVE a poster. Here are my thoughts.
- The paper I made the poster for I posted about here and posted to arxiv here. A bit awkward in that it was submitted to ERDOS 100 as USING THE ERDOS-RADO CAN RAMSEY THEOREM ON A PROBLEM ERDOS ASKED AND A PROBLEM ERDOS SHOULD HAVE ASKED, but by the time the conference came I had much better results (due mostly to co-authors; I went from 1 author to 4) and no longer used ERDOS RADO CAN RAMSEY. Do I do the poster on what was submitted or what I have now? I picked a very nice proof to concentrate on for the poster that was new but still in the spirit of what was submitted. (The final version is being written- I'll post on this blog about it later.)
- The posters were up for TWO days, for TWO hours after lunch each day. Since it was after lunch they didn't serve food. This seemed to work. They were in two shifts-- some did Tu-Wed and some did Th-Fri (I did Th-Fri).
- My actual poster was terrible. But me talking about it and pointing to things was good. This was true in general- other people's posters were hard to understand if the person wasn't there to clarify and explain, but were pretty good if they were. And it was nice to be able to ask questions directly and interrupt, unlike talks.
- As someone LISTENING to a poster talk it was better than a real talk. In one case I listened, went home that night, wrote some things down, realized I missed a point, and asked him again the next day.
- As someone GIVING a poster talk... it was very odd. I explained my results and a simple proof of one of them about 40 times in a 2-day period. I happen to like this (note that I've taught VDW's theorem at least W(6,2) times). But even though I like it, it was tiring. You know how it is ---- the first 35 times you explain a theorem you're excited about it, but then it gets to be old hat (which would have been fine if it was a talk on a hat problem).
- There were 60 posters.
This was overall a positive experience but, again, tiring.
So would this work for STOC/FOCS or other existing conferences? We would have to adjust our mentality to thinking that posters were not less prestigious. I don't think this will happen. But what about a new conference? If some new conference in theory gets started perhaps they should look into this model. A new conference does not have to follow the STOC/FOCS model.
Friday, July 19, 2013
A(nother) nice use of Gen Functions
In a prior post I tried to give a simple example of a proof that uses Gen Functions where there was no other way to do it. For better or worse, before I posted it, my HS student Sam found a better way and I posted both proofs.
I have another example. Noga Alon showed this to me over dinner at the Erdos 100th Bday conference. (He claims that the proof he showed me is NOT his but he doesn't know whose it is. I will still call it Noga's Proof for shorthand.)
Let
A+A = { x+y : x,y ∈ A}
A+*A = { x+y : x,y ∈ A and x ≠ y }
We take both to be multisets.
Assume A is a set of natural numbers. When does A+*A determine A?
If A is of size 2 then NO, A+*A does not determine A as we could have x+y=5 but not know if A is {1,4} or {2,3}.
What if A is of size 3? Then YES:
First determine S=((x+y)+(x+z)+(y+z))/2=x+y+z.
Then determine
x = S - (y+z)
y = S - (x+z)
z = S - (x+y)
What if A has four elements? Do there exist A, B of size 4, different, such that A+*A=B+*B?
YES:
A = {1,4,12,13}
B = {2,3,11,14}
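A quick sanity check, in Python, that these two sets really do have the same multiset of distinct-pair sums (both give {5, 13, 14, 16, 17, 25}):

```python
from itertools import combinations
from collections import Counter

def distinct_pair_sums(A):
    """A+*A as a multiset: x+y over unordered pairs with x != y."""
    return Counter(x + y for x, y in combinations(A, 2))

A = [1, 4, 12, 13]
B = [2, 3, 11, 14]
print(distinct_pair_sums(A) == distinct_pair_sums(B))  # True
print(sorted(distinct_pair_sums(A).elements()))        # [5, 13, 14, 16, 17, 25]
```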
For which n does A+*A, where A is of size n, determine A?
Selfridge and Strauss showed that this happens iff n is NOT a power of two. I have a write-up of Noga's proof. The original proof, in this paper, does not use gen functions and also applies to sets of complex numbers. I think Noga's proof can be modified to apply there as well. Which proof is better? A matter of taste; however, Noga's proof can be sketched on a greasy paper placemat in an outdoor restaurant in Budapest while the original proof cannot.
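For the curious, here is my reconstruction of the core generating-function argument (a sketch under my own assumptions about the proof's route; see the write-up for the real thing):

```latex
Let $f(x)=\sum_{a\in A}x^a$ and $g(x)=\sum_{b\in B}x^b$. Counting unordered
pairs with distinct elements,
\[ \sum_{s\in A+^{*}A} x^s \;=\; \frac{f(x)^2-f(x^2)}{2}. \]
Suppose $A\ne B$, $|A|=|B|=n$, and $A+^{*}A=B+^{*}B$. Then
$f(x)^2-f(x^2)=g(x)^2-g(x^2)$, so with $h=f-g\ne 0$,
\[ h(x)\,\bigl(f(x)+g(x)\bigr)\;=\;h(x^2). \]
Since $h(1)=n-n=0$, write $h(x)=(x-1)^k u(x)$ with $k\ge 1$ and $u(1)\ne 0$.
Then $h(x^2)=(x-1)^k(x+1)^k u(x^2)$, so $f(x)+g(x)=(x+1)^k\,u(x^2)/u(x)$.
Setting $x=1$ gives $2n=2^k$, i.e., $n=2^{k-1}$: a collision forces $n$ to be
a power of $2$.
```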
Tuesday, July 16, 2013
DUMP YOUR TABLES! (the moral of my story that started with a hat problem)
Recall from my last post:
PROBLEM 1: There are n people sitting on chairs in a row. Call them p1,...,pn. They will soon have HATS put on their heads, RED or BLUE. Nobody can see their own hat color. pn can see p(n-1),...,p1. More generally, pi can see all pj with j < i.
Here is the game and the goal: Mr. Bad will put hats on people any way he likes (could be RBRBRB..., could be RRRBBB, could be ALL R's - like when a teacher has a T/F test where they are all FALSE.)
Then pn says R or B, p(n-1) says R or B, etc. When people say the color everyone else can hear it.
They want to MAXIMIZE how many of them say THEIR hat color. The people can meet ahead of time to discuss and agree on a strategy.
Mr. Bad knows the strategy the people will use.
What is the best they can do? Answer: n-1:
pn says RED if the number of REDS he sees is EVEN, BLUE if the number of REDS he sees is ODD. p(n-1) sees all ahead of him, knows the parity of all of them, and can deduce his own hat. So can everyone ahead of him- the KEY is that they use BOTH what they heard from the people who already spoke and what they see ahead of them. So they can do n-1. (NOTE- a nice but not-optimal solution that some people have told me is to use the first log n people to code how many of the remaining hats are RED- this yields n - log(n) correct.)
PROBLEM 2: Same as Problem 1 but now there are c colors of hats.
The hats are colors 0,1,...,c-1. p(n) SUMS up all of the hats ahead of him MOD c. He says that number. p(n-1) heard that answer, sees what's ahead of him, sums that, and can deduce his own color. Again n-1 get it right.
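Here is a minimal Python simulation of the mod-c strategy (my own sketch; persons are indexed 0..n-1 with person n-1 speaking first, and c=2 recovers the parity strategy). It checks that at least n-1 people are always correct:

```python
import random

def play(hats, c):
    """Mod-c strategy: everyone except the first speaker is always correct.
    hats[0..n-1]; person n-1 speaks first and sees hats[0..n-2]."""
    n = len(hats)
    guesses = [None] * n
    # Last person in the row announces the sum mod c of everything they see.
    guesses[n - 1] = sum(hats[:n - 1]) % c
    # Each earlier person subtracts what they see and what they have heard.
    for i in range(n - 2, -1, -1):
        seen = sum(hats[:i])               # hats in front of person i
        heard = sum(guesses[i + 1:n - 1])  # correct answers already spoken
        guesses[i] = (guesses[n - 1] - seen - heard) % c
    return sum(g == h for g, h in zip(guesses, hats))

random.seed(0)
n, c = 10, 3
for _ in range(5):
    hats = [random.randrange(c) for _ in range(n)]
    assert play(hats, c) >= n - 1  # at least n-1 are always correct
print("all trials: at least n-1 correct")
```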
OKAY, that's the problem and the answer. NOW my story and point:
I once had a group of College Students in a summer program working on PROBLEM 1. The plan was that they would first do the people-in-a-row-2-colors version, then the people-in-a-row-c-colors version, then other versions. One can learn much math from looking at many variants. They began with the 2-color case and began working out some examples. They had these tables (note the word TABLES for later) for the n=3 case - really large decision trees- that (I think) did yield 2 people correct. They then had a table for n=4 where (I think) 2 people were correct. They worked out a few more as well, perhaps getting up to n=8. The tables got larger and larger and more complicated. I never did quite understand their tables; however, they may have been doing an ad hoc version of the strategy where n - log(n) people get the correct hats.
I let them go on (perhaps too long) since they kept telling me NO BILL, DON'T TELL US HOW TO DO IT, IF YOU KNOW. And I was hoping they would have a breakthrough. But by the end of the second week they still hadn't gotten it (NOTE- this is not an indication that they were bad students--- it's hard to tell how hard a trick is to see once you know it) and they asked me if I knew how to do it. I told them the solution above using parity. I THOUGHT they would say OH, that's very nice, now let's see if we can do something similar for c colors. But no. They insisted that their solution using tables was more intuitive or more informative or more ... something. None of that is remotely true. What is true is that by that point they were emotionally invested in their tables.
I kept saying DUMP YOUR TABLES now that you have a better way of doing it. They never did. But the phrase DUMP YOUR TABLES I now use to mean DUMP SOME OLD WAY OF DOING THINGS THAT YOU ARE EMOTIONALLY ATTACHED TO BUT REALLY DOES NOT WORK.
Once you are aware of this phenomenon you can see it often.
- You have a proof that uses a certain technique that you like (in my case perhaps Ramsey Theory) but then a better proof comes along. You have to admit that the new proof is better. DUMP YOUR TABLES.
- Your proof idea is beautiful but it just doesn't work. SHOULD YOU DUMP YOUR TABLES? Hard to tell- might work later.
- You get emotionally attached to a certain way to teach a course. Times change, technology changes, and perhaps you should DUMP YOUR TABLES.
- I have an idea for a blog entry that I think is really good and I begin writing it, and it just isn't working. I SHOULD DUMP MY TABLES.
- Sometimes in a story there is ONE really good idea and the rest is crap. This might be because the author had ONE really good idea but could not build a good story around it. He should have DUMPED HIS TABLES.
- You have a phrase that you are fond of but it distracts from the point you are trying to make. You should DUMP YOUR TABLES (see here for a case).
Monday, July 15, 2013
A problem and later a story and a point.
I have (1) a math problem to tell you about (though I suspect many readers already know it), (2) a story about it, and (3) a point to make. TODAY I'll just do the math problem. Feel free to leave comments with solutions--- so if you haven't seen it before and want to try it, then don't look at the comments. Tomorrow or later I will tell you the story and make my points.
PROBLEM 1: There are n people sitting on chairs in a row. Call them p1,...,pn. They will soon have HATS put on their heads, RED or BLUE. Nobody can see their own hat color. pn can see p(n-1),...,p1. More generally, pi can see all pj with j < i. They CAN meet ahead of time to discuss strategy.
Here is the game and the goal: Mr. Bad will put hats on people any way he likes (could be RBRBRB..., could be RRRBBB, could be ALL R's - like when a teacher has a T/F test where they are all FALSE.) Then pn says R or B, p(n-1) says R or B, etc. They want to MAXIMIZE how many of them say THEIR hat color. Assume that Mr. Bad knows the strategy the people will use.
What is the best they can do?
Here is a strategy: pn says R if the MAJORITY are R, and B if the MAJORITY are B, and then everyone says what pn says. They are guaranteed around n/2 correct.
Here is a strategy: Assume n is even. pn says the color of p(n-1). p(n-1) then says what pn said and gets it right. Then p(n-2) says the color of p(n-3), and p(n-3) gets it right. You are guaranteed to get around n/2 right.
GEE- can we do better than n/2? Or can one prove (perhaps using Ramsey Theory, perhaps something I learned at Erdos 100 over dinner) that you can't beat n/2 (or perhaps something like n/2 + log(log(n))).
PROBLEM 2: Same as Problem 1 but now there are c colors of hats.
NOTE- there are MANY hat problems and MANY variants of this scenario--- some where you want to maximize the probability of getting them all right, some where everyone sees everyone's hat but their own. These are all fine problems, but I am just talking about the version where (1) people are in a row, and (2) we want to maximize how many they get right in the worst case.
ADDED LATER- WARNING- THE ANSWER TO PROBLEM 1 IS IN THE COMMENTS NOW.
SO IF YOU WANT TO SOLVE IT YOURSELF DO NOT LOOK AT THE COMMENTS.
Thursday, July 11, 2013
Combinatorics used to not get any respect. But because of Erdos...
(This post is based on things I heard at the Erdos 100th Bday Conference)
I have spent the last week at the Erdos 100th bday conference. One point that was made many times: the acceptance of Combinatorics by the mathematics community, and Erdos's effect on that.
In the 1950's combinatorics was seen as recreational but not as serious math. In the 1970's you could get a PhD in it but it was still seen as suspect. Even at the time of Erdos's death (September 1996) it was still not that well regarded. Now it is, as evidenced by Szemeredi getting the Abel Prize (Gowers and Tao getting the Fields Medal is also evidence, though not as strong since one could argue that they are not really combinatorists). What changed?
- I would have thought Szemeredi's theorem (1975) would have turned people around on combinatorics. It didn't. Roth proved the k=3 case in the 1950's, using Fourier Analysis (``Real Math'') but Szemeredi's proof of the general case was ``purely combinatorial'' and hence of less interest. Furstenberg's proof that used Ergodic theory helped put it on the mathematical map (is the Mathematical map a bijection?) but combinatorics still was not well regarded.
- Erdos got many people interested in combinatorics and its connections to other areas such as number theory. He had incredibly good taste in problems, in that the problems he suggested often led to deep mathematics of interest, and to more problems of interest. His emphasis on asymptotics, which now seems so natural, was revolutionary at the time and later had applications to computer science. His constant pushing for better and better results, his concept of Proofs from THE BOOK, his encouraging epsilons and deltas to pursue mathematics, all had a profound effect on mathematics and mathematicians.
- One of the reasons for the disdain was that combinatorics was seen as recreational math. This was damning for two reasons: (1) the problems were not important, and (2) the proofs were easy. Both charges are unfair. They may have been true at one time but they became less true over time.
- Problems not important: P vs NP is certainly important. Ramsey Theory reveals hidden regular structure and is important. Much of the work that has gone into better bounds on the VDW numbers is very important and involves deep mathematics.
- Proofs are easy: People are using Fourier analysis, ergodic theory, and other tools that are rather difficult. Here we have the No True Scotsman fallacy, where people claim that if it uses these tools then it's not combinatorics. This raises the question of whether a field is defined by its methods or by its problems. In any case, people are solving problems in combinatorics using hard methods. But even among so-called easy proofs, they often exhibit the NP-phenomenon where they are easy to verify and hence LOOK easy, but are hard to come up with.
- One of the reasons for the respect is computer science. Just as continuous math was just the right tool for physics, discrete math is just the right tool for computer science. This led to a rich source of problems for combinatorists, which in turn led to interesting techniques.
- Erdos stressed asymptotics which was just the right approach for computer science.
Monday, July 08, 2013
AltaVista versus Google
Today Yahoo is closing AltaVista, the best search engine before Google. The news caught me by surprise; AltaVista still existed? A number of commentators blame bad management for AltaVista losing its dominance to Google. But it was an algorithm that killed the search engine.
AltaVista made its claim to fame in the mid-90's by indexing a large number of web pages. AltaVista did very well for obscure search terms like "fortnow" but didn't do so well for more common searches. I used to run a test on search engines by looking for "Holiday Inn", a popular hotel chain in the US. When you searched AltaVista for Holiday Inn, the first thing listed was a Holiday Inn in Buffalo, New York. The Holiday Inn home page was nowhere to be found in the search results.
For searches like Holiday Inn, one had to use Yahoo, which back then was not a search engine but a directory tree of web sites. We needed our own directories as well. Ian Parberry maintained the TCS Virtual Rolodex, a list of home pages of theoretical computer scientists, most of which had names common enough that AltaVista wouldn't find them.
A Stanford professor (I can't remember which one) came to give a talk at the University of Chicago around 1997 and he mentioned a research project at Stanford developing a new search engine known as Google. I tested Google with my Holiday Inn test and was in shock when the Holiday Inn home page showed up as the first result. Google passed every other test I could throw at it and I've rarely used any other search engine since. Google made AltaVista, the Yahoo directory and the TCS rolodex irrelevant. Google's PageRank algorithm simply took search to a new level, like the way that Steve Jobs didn't create the first smart phone but completely changed the game with the iPhone. AltaVista managed to survive for another 15+ years but never recovered market share.
The AltaVista story leads to a lesson we still grapple with today. Collecting and storing big data is a huge technical challenge, but data by itself is of limited value without the algorithms to find the important parts among the muck.
Tuesday, July 02, 2013
Computability in Europe
Bill and I are both in Europe this week. I'm in Milan at Computability in Europe and Bill is 500 miles away in Budapest for the Paul Erdős Centenary. The US 4th of July holiday doesn't seem to sway the Europeans from holding workshops. Bill will report on the star-studded Erdős celebration when he gets back.
So what is "Computability in Europe"? Don't the Europeans use the same Turing machines that we do? Wasn't Turing European?
Of course computation is the same, whether we do it in the US or Europe, Japan or Jupiter, but the emphasis is different. In the US we typically deal with traditional models of computers and see how much time and memory we need to solve various problems. The theme of this year's CiE is "The Nature of Computing" with "nature" being the key word. The conference is co-located with the Unconventional Computation and Natural Computation conference that focuses on different models of computing, especially those that arise from nature, like biological computing. The two tutorials this week come from Grzegorz Rozenberg, talking on computing models based on living cells, and Gilles Brassard (whom I didn't recognize without his trademark beard) on quantum models.
Me, I like my computation served straight up on Turing machines, thank you very much.
So what is "Computability in Europe"? Don't the Europeans use the same Turing machines that we do? Wasn't Turing European?
Or course computation is the same, whether we do it in the US or Europe, Japan or Jupiter, but the emphasis is different. In the US we typically deal with traditional models of computers and see how much time and memory we need to solve various problems. The theme of this year's CiE is "The Nature of Computing" with "nature" being the key word. The conference is co-located with the Unconventional Computation and Natural Computation conference that focuses on different models of computing, especially those that rise from nature like biological computing. The two tutorials this week come from Grzegorz Rozenberg, talking on computing modes based on living cells and Gilles Brassard (whom I didn't recognize without his trademark beard) on quantum models.
Me, I like my computation served straight up on Turing machines, thank you very much.
Thursday, June 27, 2013
Friends Don't Let Friends Carpool
The AAA foundation measured cognitive distraction while driving and reported that having a passenger in the car is as dangerous as using a cell phone. On a scale of 1 to 5, a handheld cell phone caused a distraction level of 2.45, a passenger 2.33 and a hands-free phone 2.27. On top of this, distraction causes risk to a passenger as well as a driver, whereas the other side of the cell phone conversation can't be harmed by a driver's distraction.
Since the popular media ignores this risk, as a public service I present some guidelines:
- Avoid carpooling whenever possible. While there are some advantages (less traffic, pollution and loss of natural resources), is it worth putting the lives of the driver, passengers and others at extra risk?
- If you do carpool, do not talk to each other except in case of emergency.
- If you need to talk, pull over to a safe place and turn off your engine before engaging in conversation.
Car manufacturers must share some of the blame by building cars with multiple seats and not physically separating the driver from the other passengers.
In the same study, the AAA foundation rated solving difficult math and verbal tasks at the top distraction level of 5. So some words of advice, particularly for readers of this blog:
Don't Drive and Derive
Monday, June 24, 2013
Quantum Techniques/Gen Functions- don't be afraid of new techniques
Ronald de Wolf gave a GREAT talk at CCC on the uses of Quantum techniques to Classical Problems. He made the analogy of using the Prob Method to prove non-prob results. This reminded me of the following false counterarguments I've heard about new techniques:
- The Prob Method: Isn't that just counting?
- Kolmogorov complexity: Isn't that just the Prob Method?
- Information complexity: Isn't that just Kolmogorov complexity?
- Counting: Isn't that just Information Complexity?
(Schur's Theorem) Let a1,a2,...,aL be denominations of coins such that no number ≥ 2 divides all of them. Then, for large n, the number of ways to make change of n cents is
n^{L-1}/((L-1)! a1 a2 ... aL) + O(n^{L-2})
For the full proof see here. My writeup is based on that in Wilf's book generatingfunctionology (the title page really does use a small g for the first letter).
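As a numerical sanity check, here is a short Python sketch (my own, not from the writeup): a standard change-counting DP compared against the leading term of the formula, where the ratio should tend to 1. It also confirms that there are 242 ways to change a dollar into pennies, nickels, dimes and quarters.

```python
from math import factorial

def count_change(n, coins):
    """Number of ways to write n as a nonnegative combination of the coins."""
    ways = [1] + [0] * n
    for a in coins:
        for v in range(a, n + 1):
            ways[v] += ways[v - a]
    return ways[n]

coins = [1, 5, 10, 25]  # gcd is 1, so Schur's theorem applies
L = len(coins)
denom = factorial(L - 1)
for a in coins:
    denom *= a

print(count_change(100, coins))  # 242 ways to change a dollar
for n in [1000, 10000, 100000]:
    exact = count_change(n, coins)
    approx = n ** (L - 1) / denom  # leading term n^{L-1}/((L-1)! a1...aL)
    print(n, exact, round(exact / approx, 4))  # ratio tends to 1
```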
The above was my INTENDED POST. However, when I showed the Gen-function proof of Schur's theorem to some students, one of them, Sam (a HS student), came back the next day with a purely combinatorial proof. It was not completely rigorous, but I am sure that he, and most of my readers, could make it so without too much effort. While having two proofs (Gen-function and Combinatorial) is MORE enlightening for me and for my readers, it does dampen my point that this is a theorem for which the gen-function proof is easier. I COULD argue that the gen-function proof did not require as much cleverness, or that once you put in the rigor the combinatorial proof is harder, but I don't really have confidence in those arguments. I include the combinatorial proof in the writeup pointed to above. Which proof is better? A matter of taste. However, I hope you enjoy both of them!
Thursday, June 20, 2013
Automate Me
An economist friend asked me if there were still productivity gains to be had for office workers (like us). After all, we have email, social networks, skype and other easy ways to connect with everyone not to mention search for everything. Most tasks are pretty straightforward to do online. How much easier can it get?
There are some obvious answers to his question, such as better automated filtering of all the information thrown at us. But here's what I really would love to see--an automated electronic me.
I have about 115,000 email conversations in Gmail, not counting spam. Google must have tens or hundreds of billions of emails from everyone combined.
So Google can learn both how many emails are typically answered and also my particular email style. So when I hit reply, Gmail should be able to pre-fill a reasonable reply. I can edit as needed and then send. Saves me much time.
Of course Google will learn from the changes I make and get more accurate each time. After a while I can trust Gmail just to answer a subset of my email. After a while it can answer most of my email. In the future Google can referee papers, write my blog posts and prepare my class lectures.
I can hide out and prove theorems while Google does everything else for me. Until Google proves its own theorems, and then I'm just out of a job.
Monday, June 17, 2013
Fraud or not ?
For each of these, are they frauds?
- The Turk was a chess playing ``computer'' (around 1770) that was later discovered to be cheating--- a human made the moves. As Ken Regan knows well, we now have the opposite problem- humans who cheat by having a computer make the moves. Note that the Turk still played an excellent game of chess and hid the human element. This IS an achievement--- just not the one people wanted. Fraud? Yes
- I once heard a rumor (NOTE- this may not be true, that's why it's called a rumor) that hybrid cars get good gas mileage NOT because of the battery but because in the effort to get good mileage the designers rethought other things, like the aerodynamics and how the gas powers the car. If I buy a hybrid car that gets 45 miles per gallon but then find out that it gets this NOT because of the battery, but because of really, really good engineering- was I cheated? My sense is NO since I wanted good gas mileage. I may wonder why I need to replace the battery, or even if I need to. Fraud? I'll say NO but it's certainly debatable.
- Someone sells a single-purpose quantum computer to factor numbers and it works REALLY WELL, but later it is discovered that it didn't use quantum at all(!)---it instead used a new classical algorithm (e.g., an extension of the Number Field Sieve). Would the buyers consider themselves cheated?
- If the buyers were people who just want to factor really large numbers then perhaps they wouldn't care.
- If the device was meant to fool granting agencies or venture capitalists into funding more quantum, then it is fraud. One may wonder why the device-maker didn't just apply for funding in crypto.
- If the buyer is an academic who then writes an article about how quantum computing is finally practical, when the truth is discovered he may have his credibility (unfairly?) tarnished.
- What if someone had a quantum computer that factored really well but was advertised as a really good classical algorithm that used hard number theory? Somehow that seems very funny to me as a scenario so I won't even ponder fraud or not.
- I have heard that the current quantum computers that do such miraculous things as factor 15 (darling says `factor 15? I could do that without breaking a sweat') or find R(3) (I always thought it was 6 and now I know!) may not be ``really quantum''. This is problematic since nobody really wants to factor 15 or find R(3)--- that is, there is no analog of the people who want good gas mileage or the people who want to factor large numbers in my two examples above. These devices are JUST for demonstration purposes. If it's not quantum, it's not demonstrating anything. Fraud? Yes, but are they really fooling anyone?
Thursday, June 13, 2013
The Internship
Last weekend I took my teenage daughters to see The Internship, the Vince Vaughn-Owen Wilson vehicle where they play two forty-year-old interns at Google. It basically follows the standard underdog story Vince Vaughn so greatly spoofed in Dodgeball.
We went since most of the Google scenes were filmed at Georgia Tech last summer, with the climactic final meeting filmed in the atrium of the Klaus building that houses the School of Computer Science.
The movie was at best mildly amusing and not too often do you see an Emacs vs Vi discussion in a major motion picture. Mostly the movie played as an homage to Google, what a wonderful magical place it is and all the great things they do for the world. To some extent that worked: Both of my daughters came out of the movie wanting to work at Google.
Larry Page, talking about the movie said "The reason we got involved with the movie ‘The Internship’ is that computer science has a marketing problem. We're the nerdy curmudgeons." I do think CS has a marketing problem, though recently of a very different nature.
The US government is using big data as big brother. The US-China discussions on cyber attacks remind me of the US-USSR talks on nuclear weapons in the 70's. Let's not mention how some people believe computers are destroying jobs and widening the gap between the haves and have-nots.
But of course I remain very bullish on computer science and the great things we can achieve with computing. And sometimes it takes silly movies like The Internship to drive that point home.
Tuesday, June 11, 2013
STOC: Some NON-radical ideas
At the STOC business meeting Joan Feigenbaum (PC chair) raised some very good points. There was no real discussion (or perhaps the burning car was the discussion). Here are the issues and some thoughts as I see them. Note that I am not speaking in any official capacity. I speak of STOC but many of my comments apply to other conferences.
What is the purpose of STOC? Initially it was to help spread knowledge of the latest results, through both talks and lunch. Even though we can now tweet the latest VDW numbers, STOC still serves this purpose. Another (likely unintended) purpose of STOC is to give researchers a quick yet prestigious way to publish. Hiring committees and tenure committees DO ask questions like How many STOC/FOCS publications does she have?. Some people think this is an awful system since these papers are not refereed carefully. I am not going to debate that here. My only concern is making STOC better at spreading knowledge.
What are some of the problems with STOC?
- People don't want to serve on the program committee since it's a lot of time and they can't submit. The two-tiered system used for STOC 2013 seems like a good solution to this.
- Referee reports (can we even call them that?) are often not very informative. The two-tiered system COULD help this since each committee member has less work and there is a small oversight committee. Another solution that some conferences use is to give the authors a chance to rebut a report and/or rewrite the paper. I'll discuss this more in the next point.
- Since the reviewing process is rushed there have been papers that are just plain WRONG. This can be confusing for someone coming to the literature. Also there are throw-away comments like This can easily be extended to the case of weighted graphs, where this is not easy at all. How big a problem is this? How much worse than journals is it? I DON'T KNOW. Would the Rebut/Rewrite help this? PRO: Referees don't have to decide RIGHT NOW what to do and can ask the authors things. CON: More back and forth, more work. CAVEAT: This might make STOC more like a journal with fast turn-around time.
- Some of the papers never get into Journal Form. Again Rebut/Rewrite may help in that the STOC version is better, but this is more giving in to the problem rather than solving it. Demanding full versions of papers (now possible since with e-proceedings page limits are less of an issue) is a good idea (and I think IS being used now by STOC).
- Many good papers get turned down. Going to three parallel sessions would help this. There may be logistical problems here, but I think this is a good idea. Are there enough good papers to make this work? I think so- and the committee would have the freedom to NOT use all the sessions in case there aren't quite enough papers. I do not think this would make STOC's prestige decline.
- It has been said that only narrow technically hard stuff gets in and not simple short new ideas. It's hard to know if this is really true. But in any case three parallel sessions may help this since there would be room for different types of papers.
- Personally I get more out of the workshops and invited talks than out of the refereed talks. Hence I would like more of those. Posters are good also. More to the point- I would like more VARIETY in what's at a conference since people get knowledge in different ways.
- Can you really communicate your latest and greatest result in a 20 minute time slot in a crowded room where the adjacent bathroom is out of order? Even though we've made great advances in technology (I call PowerPoint PROGRESS but some disagree) and in plumbing (in the old days STOC people had to use an outhouse- do young people even know what an outhouse is anymore?), is there a better way to do this? It was suggested that ALL talks be POSTER sessions (NIPS does this). This should NOT be viewed as inferior or demeaning so long as we still have published proceedings (whatever that means in the days of arXiv) and high standards. The only relevant question is: Would posters be a better way to convey results? I DO NOT KNOW, but I think it would be worth trying out.
So in summary I want to see (1) more workshops, invited talks, and student posters, (2) full papers in the proceedings, (3) a two-tiered program committee, and (4) either going to three parallel sessions or having posters. Some of these could be combined-- like a workshop on max flows, and then posters on the max flow papers that got in. The rebut/rewrite I am more ambivalent on, but that may also be a good idea. These ideas are NOT radical (and not even original) and it is NOT my purpose to drain STOC of its prestige. Whether that is a good idea is another debate.
Thursday, June 06, 2013
Complexity Typecast
Lance: Welcome to another exciting typecast coming from sunny Stanford University. I'm with Bill at the 28th Conference on Computational Complexity. Hi Bill, I see you're now at Mizzou.
Bill: Yes, my name tag says Univ. of Missouri but the body is still at Maryland. But Missouri is the Show-Me State and I don't believe theorems until you show me the proof.
Lance: Interesting name tags going around. Joshua Brody is at the University of Aarhus in Windsor, Vermont. But outside the name tags, this has been a well-run meeting.
Bill: Indeed. So Lance, anything seem different this year?
Lance: I'm noticing a few trends at both STOC and Complexity. Both have strong attendance this year, especially for West Coast meetings, and more papers than usual. Though fewer women attendees and I'm not sure why.
Bill: I was at a computability meeting at Iowa recently where there were six women, but they were the same six women from twenty years ago. That does not bode well for the future.
Lance: I think that says more about computability as I'm guessing all the attendees were there twenty years ago.
Bill: I resemble that remark. Let's talk math. At one time you were a Kolmogorov skeptic; now you are a believer.
Lance: Yes once I actually used it for a theorem I saw the light.
Bill: Speaking of light, are you now a quantum believer? Ronald de Wolf gave an awesome talk on the applications of quantum techniques to classical theorems. Were you convinced?
Lance: There are times that thinking quantumly can help generate good theorems. Nice to see quantum is good for something.
Bill: They didn't have quantum back in the days of the first complexity conference in 1986.
Lance: Yes the world was classical back then, just like the world was flat in 1400. No one here remembers that complexity meeting in 1400 but I have seen a few people from the original 1986 meeting.
Bill: Besides us, Eric Allender, Jonathan Buss, Steve Homer and Osamu Watanabe. Whether we remember anything from those days...
Lance: I remember meeting you for the first time. I was just a first-year grad student and I walked into a room with you and David Barrington talking at light speed. I thought you were both so smart.
Bill: Sorry to disappoint you. So Eric is the last man standing?
Lance: Yes, since I missed complexity last year, Eric Allender is the only person to have attended all 28 Complexity meetings. He does not want that to be his claim to fame.
Bill: Back in '86 I could follow 2/3 of 3/4 of the talks. Now I can follow 1/8 of 1/4 of all the talks. Have I gotten dumber or have the talks gotten harder?
Lance: Yes.
Bill: Thank you Lance, how about you?
Lance: Yes. Mostly the techniques are quite different from the more computability-type tools we used back in the day.
Bill: A field must change or die. I'm glad we're changing unlike certain areas of math I will not mention.
Lance: Speaking of change...
Bill: There are 242 ways of changing a dollar into pennies, nickels, dimes and quarters.
Lance: I did not know that! Moving on, what do you think of Joan Feigenbaum's suggestions on changing STOC?
Bill: I'll do a blog post on this later, but I'm generally in favor of more people on the PC (2-tiered), more papers in conference (3 parallel sessions) and more workshops, invited papers and posters.
Lance: So Bill ready to wrap it up?
Bill: Yes, it's time.
Lance: So remember, in a complex world best to keep it simple. And buy my book.
Bill: Yes, my name tag says Univ. of Missouri but the body is still at Maryland. But Missouri is the Show-Me State and I don't believe theorems until you show me the proof.
Lance: Interesting name tags going around. Joshua Brody is at the University of Aarhus in Windsor, Vermont. But outside the name tags, this has been a well-run meeting.
Bill: Indeed. So Lance, anything seem different this year.
Lance: I'm noticing a few trends at both STOC and Complexity. Both have strong attendance this year, especially for West Coast meetings, and more papers than usual. Though fewer women attendees and I'm not sure why.
Bill: I was at a computability meeting at Iowa recently where there six women but they were the same six women from twenty years ago. That does not bode for the future.
Lance: I think that says more about computability as I'm guessing all the attendees were there twenty years ago.
Bill: I resemble that remark. Let's talk math. At one you were a Kolmogorov skeptic, now you are a believer.
Lance: Yes once I actually used it for a theorem I saw the light.
Bill: Speaking of light, are you now a quantum believer? Ronald de Wolf gave an awesome talk on the applications of quantum techniques to classical theorems. Were you convinced?
Lance: There are times that thinking quantumly can help generate good theorems. Nice to see quantum is good for something.
Bill: They didn't have quantum back in the days of the first complexity conference in 1986.
Lance: Yes the world was classical back then, just like the world was flat in 1400. No one here remembers that complexity meeting in 1400 but I have seen a few people from the original 1986 meeting.
Bill: Besides us, Eric Allender, Jonathan Buss, Steve Homer and Osamu Watanabe. Whether we remember anything from those days...
Lance: I remember meeting you for the first time. I was just a first-year grad student and I walked into a room with you and David Barrington talking at light speed. I thought you were both so smart.
Bill: Sorry to disappoint you. So Eric is the last man standing?
Lance: Yes, since I missed complexity last year, Eric Allender is the only person to have attended all 28 Complexity meetings. He does not want that to be his claim to fame.
Bill: Back in '86 I could follow 2/3 of 3/4 of the talks. Now I can follow 1/8 of 1/4 of all the talk. Have I gotten dumber or have the talks gotten harder?
Lance: Yes.
Bill: Thank you Lance, how about you?
Lance: Yes. More the techniques are quite different than the more computability type tools we used back in the day.
Bill: A field must change or die. I'm glad we're changing unlike certain areas of math I will not mention.
Lance: Speaking of change...
Bill: There are 242 ways of changing a dollar into pennies, nickels, dimes and quarters.
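[Aside: Bill's count checks out. For readers who want to verify it themselves, here is a minimal Python sketch of the standard coin-change dynamic program; nothing here is specific to the conversation, it's just a quick check of the arithmetic.]
```python
# Count the ways to make 100 cents from pennies, nickels, dimes and quarters.
# ways[n] = number of ways to make n cents using the coins considered so far.
ways = [1] + [0] * 100
for coin in [1, 5, 10, 25]:
    for n in range(coin, 101):
        ways[n] += ways[n - coin]
print(ways[100])  # 242
```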
Lance: I did not know that! Moving on, what do you think of Joan Feigenbaum's suggestions on changing STOC?
Bill: I'll do a blog post on this later, but I'm generally in favor of more people on the PC (2-tiered), more papers in the conference (3 parallel sessions) and more workshops, invited papers and posters.
Lance: So Bill ready to wrap it up?
Bill: Yes, it's time.
Lance: So remember, in a complex world best to keep it simple. And buy my book.
Tuesday, June 04, 2013
STOC is Burning
Bill and I are in Palo Alto this week for the co-located meetings of STOC and Complexity. In a new ACM policy, the STOC 2013 papers are freely downloadable by all for the next month. Check out the best papers and best student papers.
Last night smoke from a burning car preemptively ended the STOC business meeting.
Photo from Moritz Hardt
Before the fire I live-tweeted the business meeting. In short: a possible record attendance for a West Coast meeting (364), one less than New York last year. Next year's STOC will be in the same hotel in New York. A record number of accepted papers (100). PC chair Joan Feigenbaum talked about her two-tiered committee and several potential experiments for future STOCs (for instance, eliminating proceedings and just pointing to arXiv papers). Read her blog interview for more.
Lane Hemaspaandra received the SIGACT Distinguished Service Prize for running the SIGACT News complexity column. Gautam Kamath won the STOC 2012 best student presentation award; no award was given this year because there are no videos of the talks.
Gary Miller gave the Knuth Prize lecture. He talked about new techniques for solving graph-structured systems of linear equations, techniques with many applications, including new almost-linear-time algorithms for approximating undirected max flow.
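(The systems in question are Laplacian systems Lx = b, where L is the Laplacian of a graph. As a toy illustration only, and emphatically not Miller's algorithm, here is a small Python sketch that sets up and solves one such system with off-the-shelf conjugate gradient; the fast solvers from the lecture exploit the graph structure far more cleverly to get nearly linear time.)
```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

# Adjacency matrix of a 4-cycle; the graph Laplacian is L = D - A.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = laplacian(A)

# L is singular (constant vectors are in its kernel), so b must sum to 0.
b = np.array([1., 0., 0., -1.])

# Generic conjugate gradient; works since L is symmetric PSD and b sums to 0.
x, info = cg(L, b)
print(info == 0, np.allclose(L @ x, b))  # True True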
More from Palo Alto later this week.
Thursday, May 30, 2013
The High Quality Research Act
Lots of talk, mostly negative, about the proposed High Quality Research Act.
Prior to making an award of any contract or grant funding for a scientific research project, the Director of the National Science Foundation shall publish a statement on the public website of the Foundation that certifies that the research project
(1) is in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;
(2) is the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and
(3) is not duplicative of other research projects being funded by the Foundation or other Federal science agencies.
On the whole, this doesn't sound like a bad thing. So why the fuss? Because the bill's sponsor Lamar Smith, Republican congressman from Texas and chair of the House science committee, also sent a letter to the NSF acting director asking for the reviews on five grant proposals. So the High Quality Research Act is an attempt to give congressional approval to the grants process, perhaps requiring justification of individual grants. Nothing good can come from that.
The NSF bravely said no to Smith's request for the reviews. That was two weeks ago and I haven't seen any news on the topic since. Let's hope the High Quality Research Act simply disappears.
Tuesday, May 28, 2013
Theory Jobs 2013
Time for the annual spring jobs post. Like last year, I set up a Google Spreadsheet that everyone can edit so we can crowdsource who is going where next year.
A reminder of the rules
- I set up separate sheets for faculty, industry, postdoc/visitors and students.
- People should be connected to theoretical computer science, broadly defined.
- Only add jobs that you are absolutely sure have been offered and accepted. This is not the place for speculation and rumors.
- You are welcome to add yourself, or people your department has hired.
Thursday, May 23, 2013
Quantum Computing Fast and Slow
I just read two very different science books, Daniel Kahneman's Thinking, Fast and Slow and Scott Aaronson's Quantum Computing since Democritus. Not much connects the two except that both deal to some extent with probability and computation, and I want to write a blog post for each chapter, since there is much I disagree with from both authors. But that's what makes them so much fun: it's rare to find science-oriented books that are both worth reading and have the guts to say things one can disagree with.
In full disclosure, Scott and I agreed that he would post about my book if I wrote about his, but what a deal. Scott's book is a pleasure to read. He weaves the story of logic, computation and quantum computing into a wonderful tour. You can get an idea of Scott's style from how he explains how he will explain quantum.
The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core - namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.
He approaches the whole book with this philosophy. Every now and then he moves into technical details that are best skipped: either you already know them or you will get lost trying to follow. But no problem, the story remains. You need to appreciate Scott's sense of humor and his philosophical tendencies, and he does get way too philosophical near the end, particularly in a strange attack on Bayesianism that involves God flipping a coin. At the end of the book Scott contemplates whether computer science should have been part of a physics department, but after one reads this book the real question is whether physics should be part of a CS department.
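(An aside of my own, not from the book: the minus signs do real work. Here is a minimal numpy sketch, assuming nothing beyond matrix multiplication, of a two-state system where amplitudes cancel in a way no classical probability distribution can.)
```python
import numpy as np

# A Hadamard-like "quantum coin flip": unitary, with a minus sign.
H = np.array([[1.,  1.],
              [1., -1.]]) / np.sqrt(2)

state = np.array([1., 0.])   # start in state |0>
once  = H @ state            # amplitudes (0.707, 0.707): looks like a fair coin
twice = H @ once             # amplitudes (1, 0): the |1> amplitude cancelled

print(np.abs(once) ** 2)     # [0.5 0.5] -- measuring now is 50/50
print(np.abs(twice) ** 2)    # [1. 0.]   -- flipping "twice" is deterministic
```
A classical stochastic matrix applied twice can never undo randomness this way; that cancellation is exactly the kind of behavior the generalized-probability story is about.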
Kahneman gives a readable tour of behavioral economics with a variety of examples, though I don't agree with his interpretation of many of them. His fast and slow refer to decisions we make instinctively and quickly (like judging a person based on first impressions) versus slowly and deliberately (like multiplying numbers). There is a computer science analogy, in that his fast refers to what we can do with machine learning: simple trained models that make quick judgments and occasionally get things wrong. I'm not a huge fan of behavioral economics, but it is useful in life to know the probabilistic mistakes people make so you can avoid making them yourself. The Wikipedia article has a nice summary of the effects mentioned in the book.
While these two books cover completely different areas, the themes of probability and computation pervade both of them. One simply cannot truly understand physics, economics, psychology and for that matter biology unless one realizes the computational underpinnings of all of them.