Showing posts with label cs.CC.

Friday, June 01, 2012

Minimum language complexity needed to construct exponential time algorithm

(while this could have been a question on cstheory, I think it's a little too vague - but it's perfect for a blog post!)

I was at dinner with some faculty colleagues (assistant professor beer night - best invention known to mankind!) and we were in the traditional "all our students are terrible" phase of the evening*. The discussion turned to "what programming concepts do students understand after an intro-CS class", which led to the following claim:
"you can't write an exponential time algorithm without knowing recursion"
While this seemed plausible, I was skeptical. Thankfully my PL colleague (and super blogger) Matt Might was at the table, and he pointed out that the lambda calculus formally has no recursive constructs and yet is Turing-complete, via the magic of the Y-combinator (which essentially computes fixed points directly).
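A minimal sketch of this idea in Python (using the Z-combinator, the call-by-value variant of Y, since Python evaluates eagerly; the names here are purely illustrative):

```python
# Z-combinator: the strict-evaluation variant of the Y-combinator.
# No function below refers to itself by name anywhere.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial, defined without any named recursion: Z hands the function
# a handle ("rec") to its own fixed point.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

The point is that `fact` never mentions itself; the self-application inside Z manufactures the recursion.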

Of course one might argue that a fixed point operator does give you recursion (indeed the Y-combinator is often referred to as anonymous recursion). So my question really is:

Is there a well-defined notion of "programming languages without recursion or recursion-like substances", and if so, what is the most expensive algorithm that can be expressed in such a language?
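One wrinkle worth noting: plain bounded loops already reach exponential time with no recursion or fixed-point machinery in sight. A loop-only Python sketch (brute-force subset sum; the function and inputs are illustrative, not from the original discussion):

```python
def subset_sum(values, target):
    """Check whether some subset of values sums to target."""
    n = len(values)
    # Enumerate all 2^n subsets with a plain counter -- no recursion anywhere.
    for mask in range(2 ** n):
        total = 0
        for i in range(n):
            if mask & (1 << i):  # bit i of mask selects values[i]
                total += values[i]
        if total == target:
            return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

So the interesting version of the question is presumably about languages whose loop bounds are themselves constrained, not just about banning recursive calls.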

p.s. BDDs seem like an obvious candidate...


* Other regularly discussed phases include "I can't believe <latest cryptic thing> senior faculty member said", "How many frequent flyer miles do you have?", and "chickens!". Beer night is awesome!

Wednesday, March 25, 2009

SIGACT article by Viola

There's an excellent article by Emanuele Viola in the latest SIGACT News ($$) on the problem of finding functions that have low correlation with low-degree polynomials over GF(2).

p.s. this is the kind of thing that one would merely twitter, but then I'd want it to somehow feed into my blog automatically.

Friday, May 16, 2008

The rectangle method in communication complexity

It's tempting to take away the impression that the P vs NC proof is a one-off kind of thing, with no real lessons for other kinds of lower bounds that one might want to prove. Only time will tell whether this is the case.

But I was thinking about communication complexity lower bounds recently. Consider the standard 'rectangle' method for proving a communication complexity lower bound. As usual, we have some (Boolean) function f(x,y) whose answer we wish to compute. Alice has x and Bob has y, both n-bit strings, and we want to exchange as few bits as possible in order to compute the answer (since f is Boolean, it doesn't really matter who ends up with the final answer: they can always pass a bit back to the other party).

A natural description of the communication is in the form of a matrix, where each row is labelled with one of Alice's inputs, and each column is labelled with one of Bob's possible inputs. Now let's say Alice sends the first bit. This bit depends only on x, and so we can partition the rows into two blocks, one containing all inputs for which Alice sends a 1, and the other containing all rows for which she sends a 0. As the communication proceeds, we can further partition the space into equivalence classes of sets of inputs that induce the same communication pattern.

So far, we're doing what most lower bound arguments tend to do: finding equivalence classes of inputs prior to doing some kind of counting (think of the sorting lower bound). But here comes a neat twist.
Each equivalence class thus defined forms a rectangle in the matrix.
The proof of this is not too hard, and is a consequence of the fact that Alice and Bob can each only see their side of the input pair. Let's say one of these equivalence classes contains the input pairs (x1, y1) and (x2, y2), which means that for both these input pairs, the conversation transcript is identical.

But now what happens if the input is instead (x1, y2)? Alice sees x1, and sends the same first bit she would have sent on input (x1, y1). Bob sees this bit, looks at y2, and has no way of telling that Alice's input was NOT x2, and so sends out the next bit of the transcript. Repeating this argument for the remainder of the conversation, it's easy to see that (x1, y2) must also generate the same transcript, as must (x2, y1). Geometrically, this proves that every equivalence class must be a (combinatorial) rectangle, and therefore the matrix decomposes into rectangles on each of which the protocol's answer is constant.
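The closure argument above can be checked mechanically. A small Python sketch, with a made-up toy protocol on 2-bit inputs (the protocol itself is hypothetical; only the rectangle property being tested comes from the argument):

```python
from itertools import product

def transcript(x, y):
    """A toy deterministic protocol: Alice sends x[0], then Bob replies
    with y[0] XOR the bit he received. (Illustrative only.)"""
    a = x[0]
    b = y[0] ^ a
    return (a, b)

inputs = list(product([0, 1], repeat=2))  # all 2-bit strings

# Group input pairs by the transcript they induce.
classes = {}
for x, y in product(inputs, repeat=2):
    classes.setdefault(transcript(x, y), []).append((x, y))

# Rectangle property: if (x1,y1) and (x2,y2) share a transcript,
# so do (x1,y2) and (x2,y1) -- i.e., each class equals A x B.
for pairs in classes.values():
    A = {x for x, _ in pairs}
    B = {y for _, y in pairs}
    assert set(pairs) == set(product(A, B))
print(len(classes), "transcript classes, all rectangles")
```

The same check works for any deterministic protocol plugged into `transcript`, which is exactly what the proof asserts.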

In other words, what we've done is come up with a geometric way of describing equivalence classes of inputs. This means directly that EQ(x,y) (which is 1 iff x = y) needs n bits of communication: the corresponding matrix has 1s along the diagonal only, and there's no way of covering these with fewer than 2^n monochromatic rectangles, while a protocol exchanging c bits induces at most 2^c rectangles. Again, we're using the geometry induced by the computation to make arguments about the lower bound construction. This geometric structure is used algebraically in the more involved rank-based lower bound argument as well.
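The key counting step, that a 1-monochromatic rectangle of EQ can cover at most one diagonal entry, can be verified directly. A short Python check (n = 3 here; the variable names are illustrative):

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))  # all n-bit strings

EQ = lambda x, y: int(x == y)

# If a rectangle A x B contained two diagonal 1-entries (x,x) and (y,y)
# with x != y, it would also contain the 0-entry (x,y), so it could not
# be 1-monochromatic.  Hence covering the 2^n diagonal 1s takes at
# least 2^n monochromatic rectangles.
for x, y in product(inputs, repeat=2):
    if x != y:
        assert EQ(x, x) == 1 and EQ(y, y) == 1 and EQ(x, y) == 0

print("checked", len(inputs) ** 2 - len(inputs), "off-diagonal pairs")
```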

I'm not claiming that these are arguments identical to what we're seeing in P vs NC. My point is that the idea of exploiting the geometry induced by a set of computations is not as unfamiliar as it might seem.
