
Wednesday, March 12, 2025

Covid and Complexity

As we hit five years from when the world shut down, there have been lots of discussions on how Covid has changed society. What about academia and computer science?

It's a challenging question to ask, as Covid is not the only major change in the last five years. We've seen wars in Ukraine and Gaza and huge political changes around the world. We've had major technological changes as well, most notably the rise of machine learning, particularly large language models.

But Covid changed us quickly, moving us immediately to online teaching, meetings and conferences. Both my children moved in with us for six months. Crowded, but it was great to spend time together.

Most of us have moved on from Covid, though a small number still take it seriously, avoiding crowded areas, wearing masks, not eating indoors at restaurants, even still isolating. We should all respect everyone's individual risk decisions when it comes to the disease. And we should never forget the many we've lost to it.

We saw many attempts at virtual and later hybrid conferences but none worked particularly well, despite some valiant efforts. There's just a limit to how long you can be engaged staring at a screen. The best we have now are recorded talks with a watch party, and a separate in-person meeting. Not just for Covid, but because international travel has become more difficult for many.

By necessity we saw a vast improvement in online collaboration tools, at least until Google killed Jamboard. With papers generally available online, there is very little you can do in your office that you can't do at home. So we see fewer people, faculty, students and staff, coming into work every day. Collaborating with someone across an ocean is almost as easy as collaborating with someone at your own university. I find new faculty choose a university less for its resources and colleagues and more for its location.

Personally I try to avoid virtual meetings as much as I can, which is not nearly enough. I have a student I work with at another university in Chicago who prefers to make the long trek here to talk research rather than meet on Zoom, as we are much more productive that way. Others seem to prefer and even thrive on online meetings. To each their own.

The students have suffered the worst from Covid, especially those who lost a year or more of in-person pre-college education. We see some incoming students struggling more, both academically and socially, with many just watching their lectures online or not fully engaging if they come in person. It will take a whole generation before we fully recover as a society from that disease.

Sunday, March 09, 2025

Numbers that look prime but aren't

 
A long time ago I made up the question (are questions ever really made up?)

What is the least number that looks prime but isn't?

It was not quite a joke in that it has an answer despite being non-rigorous.

My answer is 91:

Divisibility by 2, 3, 5, and 11 can be checked with TRICKS.

Squares are well known.

So the first number that looks prime but isn't is \(7\times 13=91.\)
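Here is a quick Python sketch of that filter (my own encoding of the above: rule out anything caught by the 2, 3, 5, or 11 tricks, or by being a perfect square; the search cutoff of 200 is arbitrary):

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def fooled(n):
    """Composite, yet not caught by the 2,3,5,11 tricks or by being a square."""
    if is_prime(n) or any(n % d == 0 for d in (2, 3, 5, 11)):
        return False
    return math.isqrt(n) ** 2 != n  # squares are well known, so rule them out

print(next(n for n in range(2, 200) if fooled(n)))  # -> 91
```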

I recently saw the question asked on Quora and given a more interesting answer. The answer is by Nathan Nannon, who has a PhD in Math from Univ of CA, Davis (graduated 2021). I paraphrase it here and then have some questions.

-----------------

What does it mean to look prime?

FIRST CRITERION: 

1) Its decimal representation ends in 1, 3, 7, or 9, and

2) It is not on the multiplication tables that a lot of people memorize, which go up to 12.

Based on these criteria, 39 is the first composite number that looks prime.

(If people no longer learn the mult tables then the answer would be 21.) 

SECOND CRITERION: Use the trick that a number is divisible by 3 if the sum of its digits is divisible by 3. Then the first composite number that looks prime is 91, followed by 119, 133, and 143.
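A Python sketch of these two criteria (the 12x12 times table and the digit-sum test are my encoding of the rules above):

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

# The multiplication tables that a lot of people memorize, up to 12x12.
times_table = {a * b for a in range(1, 13) for b in range(1, 13)}

def looks_prime(n):
    return (n % 10 in (1, 3, 7, 9)               # first criterion, part 1
            and n not in times_table             # first criterion, part 2
            and sum(map(int, str(n))) % 3 != 0)  # second criterion

print([n for n in range(2, 150) if looks_prime(n) and not is_prime(n)])
# -> [91, 119, 133, 143]; drop the digit-sum test and 39 comes first
```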

THIRD CRITERION: Fermat's Test: If \(p\) is prime then for all \(a\), \(a^p\equiv a \pmod p\).
Numbers that pass this test and yet are composite are called Carmichael numbers.
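A brute-force sketch to find them (slow, but fine for small \(n\); is_prime is plain trial division):

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def is_carmichael(n):
    """Composite n that passes the Fermat test for every base a."""
    return n > 1 and not is_prime(n) and all(pow(a, n, n) == a for a in range(2, n))

print([n for n in range(2, 10000) if is_carmichael(n)])
# -> [561, 1105, 1729, 2465, 2821, 6601, 8911]
```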

Here are the first few Carmichael numbers:

561 AH- that does not count since 5+6+1 is divisible by 3.

1105 AH-doesn't count, ends with 5.

1729 AH- Nathan Nannon points out that 1729 is the sum of two cubes (more on that later) and hence we can use \(a^3+b^3 = (a+b)(a^2-ab+b^2)\). This only works if you KNOW that 1729 is the sum of two cubes. AH- most mathematicians do know this because (1) it is the least number that can be written as the sum of 2 cubes in 2 diff ways, and (2) this fact has been popularized by the following true story (I quote from Wikipedia, see here) which explains why such numbers are called Taxicab Numbers

The name is derived from a conversation ca. 1919 involving mathematicians G. H. Hardy and Srinivasa Ramanujan. As told by Hardy:

I remember once going to see him [Ramanujan] when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the number seemed to be rather a dull one, and that I hoped it was not an unfavourable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."

Note 

1) Oddly enough, the fact that 1729 is the sum of two cubes in TWO diff ways does not make it any easier to factor. We just needed one way. 

2) To say that 1729 does NOT look prime depends on history as well as math. If not for the Hardy-Ramanujan story, would most mathematicians know that 1729 is the sum of 2 cubes? Possibly, since it's 1000+729. But it's not clear. Martians may think 1729 looks prime.


2465 AH-doesn't count, ends with 5

2821 AH- just looking at it, it is clearly divisible by 7.

6601 AH- OKAY, this one really does look prime.

UPSHOT
The least number that looks prime but isn't is either 21 OR 39 OR 91 OR 6601, or something else, depending on what looks prime to you.

------------------------------------------------------

QUESTION
Is there some natural and simple criterion that rules out 6601? This may depend on your definitions of natural and simple.

QUESTION The first few Carmichael numbers have small factors. 6601 is divisible by 7. Is there some function \(f\) with \(f(n) \ll \sqrt n\) such that if \(n\) is a Carmichael number then it has a factor \(< f(n)\)?

The next Carmichael number after 6601 is 8911, which 7 divides. So that looks good. But alas, Jack Chernick proved (see here) that any number of the form \((6k+1)(12k+1)(18k+1)\), where \(6k+1\), \(12k+1\), and \(18k+1\) are all primes, is a Carmichael number. It is not known if this form generates infinitely many Carmichael numbers. Hence if such an \(f(n)\) exists then it is probably \(\Omega(n^{1/3})\).
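A quick sketch of Chernick's form (note the least factor \(6k+1\) is about \(n^{1/3}\), which is where the \(\Omega(n^{1/3})\) guess comes from):

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

chernick = [(6*k+1) * (12*k+1) * (18*k+1)
            for k in range(1, 100)
            if all(is_prime(m) for m in (6*k+1, 12*k+1, 18*k+1))]
print(chernick[:4])  # -> [1729, 294409, 56052361, 118901521]
```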


Wednesday, March 05, 2025

Taking a Stand

On February 20th we got the news from the National Science Foundation Algorithms Foundations Team that long-time NSF program director Tracy Kimbrel was leaving the NSF, and not by choice.

Along with many others in part-time status at NSF, my service has been terminated earlier than I intended or expected.  It has been a great privilege and a great honor to serve the Algorithmic Foundations community over the last decade and a half.  It's disappointing to have it end so abruptly.  I will miss it and all of you.

Tracy is just one of many government employees losing their jobs, but when you know someone it feels personal. Tracy has been a fixture at the NSF, often coming to theory conferences to talk about grant opportunities and the state of the foundation. In my yearly pre-Covid pilgrimages to the NSF for panels, I always had great conversations with Tracy and watched him work, getting the information he needed from us to make the tough decisions of which projects to fund, always many more worthy than the available funding. The theory community loses with Tracy out of the NSF.

We did get some good news earlier this week with the NSF reinstating most of their probationary employees. And Trump did say near the end of his speech yesterday "we are going to conquer the vast frontiers of science" but apparently we'll do it with a much smaller NSF if Trump follows through with his plans.

Talking with some fellow academics at another university, I found they had almost given up. "What can we do?"

We can push back.

Start by doing nothing. Don't preemptively change your policies and your values. Too many universities and organizations are abandoning DEI programs, changing their curricula, and freezing hiring of faculty and students in anticipation of challenges to come. We may see a time when new policies survive the courts and force us to change, but not yet.

While the Democrats in Congress seem powerless, many of the governors, including my own governor JB Pritzker, have fought back, mostly in the courts, and have stopped, for now, much of the damage to the NIH and NSF. The computing societies urge Congress to protect our research funding, especially at a time when we need to compete technologically with China and other countries.

As individuals, we can take our own steps, participate in Stand Up for Science on Friday, reach out to our representatives at the national and state level, and just be part of the resistance. We can't let bullies dictate our future, we must control it for ourselves. 

Sunday, March 02, 2025

Karp recently turned 90 but there was no conference to celebrate that. Which numbers do we use and why?

Karp turned 90 in January of 2025. I searched to see if there is a 90th Birthday Conference for him.  I did not find one (is there one?).  For which years do we have celebratory birthday conferences?

Here are some conferences in honor of 60th Birthdays, in alphabetical order of last name.

Eric Allender here

Laci Babai here

Allan Borodin here

Stephen Cook here

Rod Downey here

Juris Hartmanis (at the 1988 Structures, now Complexity, conference, predating the web). Lance posted about it here.

Russell Impagliazzo here

Ravi Kannan here

Richard Karp (I could not find the link.)

Stuart Kurtz here

Michael Rabin (I could not find the link but I recall planning to go but snow cancelled my flight.)

Ronitt Rubinfeld here

Michael Saks here

Alexander Shen here

Michael Sipser here

Shang-Hua Teng here

Leslie Valiant here (I thought he also had an 80th bday but I was wrong- he is younger than 80.)

Vijay Vazirani here

Nikolay Vereshchagin here

Avi Wigderson here

I am sure there are more. 

Having a conference for someone's 80th birthday is also done. Here are a few:

Richard Stanley here

Michael Rabin here

Stephen Cook here (This was actually a conference for 50 years of Complexity Theory: A Celebration of the work of Stephen Cook. It was 50 years since Cook published what is now called the Cook-Levin Theorem. It also happened to be the year he turned 80.) 

Donald Knuth here, but also see Lance's blog post on the meeting here and Knuth's favorite bday here.

Martin Kruskal here, and here for my post on it and Clyde Kruskal's (Martin's son) post on it. That was back in 2007 when I was a guest poster. And Clyde was my guest, so he was a guest-guest poster.

I am sure there are many more. 

Numbers between 60 and 80 are rare (my wife read this and noted that there are 19 of them, not including endpoints), but here are some:

John Osborne  (UMCP Math Prof) had a 64th. Could not find a link. 

Dick Dudley 65. (MIT Math professor, not to be confused with Big Dick Dudley, who was a wrestler, or Underwood Dudley, who wrote a book on Math Cranks, see here, which I was amazed to find out I did NOT write a review of.)

Mike Paterson here (Why 66? Why not 66!)

Harry Lewis had a 70th, see here (I asked WHY 70? He said the organizer, Margo Seltzer, wanted it then. That is another point- the organizer really controls which year and also if it happens at all.) 

Mihalis Yannakakis had a 70th here 

Ravi Kannan 70 here

Leonid Levin had a 75th, see here

Dick Lipton had a 75th, see here

Manuel Blum had a 77th since 77=7*11 is a Blum Integer. (The only reason I know it exists is because Lance went to it.)

I've seen 100th bday conferences. 

Turing here (This is one of many Turing Celebrations for his 100th. It was in 2012. Turing died in 1954.)

Erdos here (This was in 2013, the centennial of his birth. Erdos died in 1996.)

Chernoff here (He attended. He is still alive as of this writing, at the age of 101.)

Kolmogorov here (The day K turned 100, a student told me this. I then gave a lecture on Kolmogorov complexity instead of the planned topic, on the fly. Now that my course is all on slides, and some classrooms don't even have a blackboard or whiteboard, I can't do that anymore. Oh well.)

I am sure there are more. 

1) Why are 60, 80, 100 the usual numbers? They are nice and round. And 60 is big enough so that the person being celebrated has done stuff, but not so big that they are dead. 

2) There should be more clever ones like Osborne (64) and Blum (77). If there were ever a conference in my honor that would be hard, since the number most associated with me is 5/12 (see here). I had not done much in math at the age of 5 months. Oh well.

Wednesday, February 26, 2025

You Need Much Less Memory than Time

Just as I was complaining that we haven't seen many surprising breakthroughs in complexity recently, we get an earthquake of a result to start the year, showing that all algorithms can be simulated using considerably less memory than the running time of the original algorithm. You can reuse space (memory) but you can't reuse time, and this new result from Ryan Williams, in an upcoming STOC paper, provides the first stark difference.

DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(\sqrt{t(n)\log t(n)}\))

This is a vast improvement on the previous best known simulation, the classic 1977 Hopcroft-Paul-Valiant paper showing

DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(t(n)/\log t(n)\))

only slightly lower than the trivial \(t(n)\) bound. Williams gets a huge near-quadratic improvement that will go down as a true classic complexity theorem. Note that the space simulation does not maintain the time bound.

Williams' proof relies on a space-efficient tree evaluation algorithm by James Cook and Ian Mertz from last year's STOC conference. Cook and Mertz's algorithm builds on earlier work on catalytic computing, highlighted in a recent Quanta article.

Let me give a highly oversimplified view of the combined proof.

A \(t(n)\) time Turing machine uses at most that much space on its tapes. Split the tapes into \(\sqrt{t(n)}\) segments of size \(\sqrt{t(n)}\). Using the fact that it takes \(\sqrt{t(n)}\) time to cross an entire segment, Williams, with some clever tricks, models acceptance of the Turing machine as a circuit of bounded degree and depth \(\sqrt{t(n)}\), where the wires carry the contents of the size-\(\sqrt{t(n)}\) segments at various times in the computation.

Williams then applies the tree evaluation algorithm of Cook and Mertz. Cook and Mertz use finite fields to encode these segments as a combination of registers of size \(\log t(n)\) and show how to compute the value of each node of the tree using only \(\sqrt{t(n)}\) space for the local computation plus needing to only remember a constant number of registers while reusing the rest of the space when recursively computing the tree. It's pretty magical how they manage to make it all work. 
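To see what they are improving on, here is a toy framing in Python (my own sketch, not the Cook-Mertz algorithm): naive recursive evaluation keeps one full value per level of the tree, so space is about depth times value size, which in Williams' setting is \(\sqrt{t(n)}\cdot\sqrt{t(n)} = t(n)\), no savings at all. Cook and Mertz shrink the per-level cost to a constant number of \(\log t(n)\)-sized registers.

```python
# Toy baseline, not the Cook-Mertz algorithm. The interface (children,
# leaf_value, combine) is hypothetical, standing in for the circuit that
# Williams builds from the Turing machine's tape segments.

def naive_tree_eval(node, children, leaf_value, combine):
    kids = children(node)
    if not kids:
        return leaf_value(node)
    # The stack holds one full list of child values per level of recursion,
    # so space grows as (tree depth) x (size of a value).
    vals = [naive_tree_eval(k, children, leaf_value, combine) for k in kids]
    return combine(node, vals)
```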

It's worth going through the proof yourself. I recommend Section 3.1 and Footnote 6 in Williams' paper (a slightly weaker space bound but much simpler) and Sections 2-4 of the Cook-Mertz paper. Oded Goldreich has an alternative exposition of the Cook-Mertz algorithm and proof.

Williams' theorem works for multitape Turing machines and oblivious random-access machines, where the queries to memory are fixed in advance. He shows how to use this result to compute the output of a circuit of size \(s\) using nearly \(\sqrt{s}\) space. Fully general random-access machines remain open, as do nondeterministic and other models of computation (randomized, quantum, etc.).

In 1986 my advisor Mike Sipser gave the first hardness vs randomness result, showing roughly that if there were problems that took time \(2^n\) but could not be solved in space \(2^{.99n}\) on multi-tape Turing machines then RP = P. Williams' theorem kills this assumption though we've developed weaker assumptions since. 
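To see why: plugging \(t(n) = 2^n\) into Williams' bound gives space \(\sqrt{2^n \cdot n} = 2^{n/2 + O(\log n)}\), far below \(2^{.99n}\), so no problem solvable in time \(2^n\) can require that much space.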

Moving forward, can we push Williams' result to get a simulation in space \(t(n)^\epsilon\) for \(\epsilon<1/2\)? A simulation for all \(\epsilon>0\) would separate P from PSPACE. Even a slight improvement would have applications for alternating time. Maybe try to use the Cook-Mertz techniques directly in the Turing machine simulation instead of going through computation trees.

Read sections 4 and 5 of Williams' paper for some further consequences and challenges for further improvements. 

Sunday, February 23, 2025

Why my department hopes I do not die this spring

Alice is scheduled to teach X in the Spring.

Then Alice CAN'T! (illness, death, or some other reason)

What is the department to do?

1) If it's an undergraduate class then likely there are other people who are QUALIFIED. Perhaps a grad student, perhaps a postdoc, perhaps someone borrowed from another dept (Math for a theory course for example), perhaps a teacher of another section of the course, perhaps a retired teacher in the area. Might be rough because of the short notice. In Math this is easier since the courses are more standardized.

2) If it's a graduate class then either

a) It's still something that someone else could teach. The set of people is smaller but might be nonempty. 

b) Nobody else can teach it. Still, it's a graduate class, so it's small, so it can be cancelled and the students can probably find other courses to take.

There is another positive aspect to this negative situation: If nobody else can teach it then probably the entire department is small, so even more likely that the class is small.

If I die this semester then the department will be faced with the perfect storm:

a) I am teaching a graduate course that would be a lot of work for someone else to get up to speed on: Ramsey Theory. (I do have good slides, so that might help.)

b) There are around 30 students taking it. (There are around 30 in most of the graduate courses, and more in the AI graduate courses.)

SO, what would they do? 

1) Cancel it. There are a few other theory grad courses the students could take, and some might want to take a non-theory course. Still awkward since those courses are already fairly full.

2) Have someone teach a similar course, like Prob method (the only part of Ramsey Theory that Lance thinks is worthwhile, see here) or combinatorics. This would be a lot of work so it may be hard to find someone who BOTH is qualified AND wants to. Perhaps a grad student, though I think we try to avoid having grad students teach grad courses. Then again, desperate times call for desperate measures. Perhaps a former grad student who is still in the area geographically and mathematically. 

I've been talking about the teacher being unable to teach the course BEFORE it begins. What if the teacher becomes unavailable DURING the semester? That's even harder to deal with.  

OKAY, the above has all been abstract and the events portrayed are rare. Here are things I have SEEN happen, and what was done:

1) Someone hired as a lecturer to start in the spring ended up being unable to come. This was found out in November. They were going to teach 2 ugrad courses; 2 profs did an overload.

2) Someone who was going to teach a grad course ended up being unable to do so. This was found out in December. That teacher really did have a former grad student in the area who was available and qualified. Lucky!

3) In December a teacher ended up being unable to teach with 2 lectures to go, and the final to be administered and graded. Another teacher (who had taught that course) took over, but this was not a big deal since it's not much work to finish out a course.

4) A teacher knew ahead of time that they wouldn't be able to teach for 4 weeks. Two other teachers agreed to cover the course in that teacher's absence.

None of the above were ideal, but solutions were found that did work (for some definition of worked). But I do wonder if there will come a time when no solution can be found.

One piece of advice: If you are not going to be able to fulfill your commitment to teach a course, let your chairman know with a lot of lead time so they can find a solution. 


Wednesday, February 19, 2025

Tomorrow and Yesterday

I recently completed Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin, a book recommended by many including the City of Chicago. The novel covers the decades-long journey of two game developers, Sadie and Sam, and how their lives interact with the games they create.

A paragraph towards the end made me rethink the whole book (not a spoiler):

Well, if we’d been born a little bit earlier, we wouldn’t have been able to make our games so easily. Access to computers would have been harder. We would have been part of the generation who was putting floppy disks in Ziploc bags and driving the games to stores. And if we’d been born a little bit later, there would have been even greater access to the internet and certain tools, but honestly, the games got so much more complicated; the industry got so professional. We couldn’t have done as much as we did on our own.

This paragraph hearkens back to my post last week, about how the era you grew up in can affect your trajectory. But also I'm a generation older than the book's main characters, and indeed Ribbit was distributed on a floppy disk in a Ziploc bag.

The novel at its heart is about two friends making games. I was lucky to have that experience myself for a couple of years in the early 80s, with high school friend Chris Eisnaugle, working on Ribbit, Excalibur and some other games that never saw the light of day. We coded for days on end while listening to music like REO Speedwagon, and taking time off for bowling or watching early Schwarzenegger movies. Coding in assembly language on slow processors with limited graphics, taking advantage of our complementary strengths and making it work. I don't regret leaving that life behind for the theoretical wonders of computational complexity, but that doesn't mean I don't miss it.

Sunday, February 16, 2025

Big Breakthrough in the exciting world of sum-free sets!

 Let \([n]\) be the set \(\{1,\ldots,n\}\). (This is standard.)

Let \(X\) be a set of integers. \(X\) is sum-free if there are NO \(x,y,z\in X\) such that \(x+y=z\). (Note that \(x=y\) is allowed.)

Let's try to find a large sum-free subset of \([n]\). One obvious candidate is

\(\{1,3^1, 3^2,\ldots,3^{\log_3 n} \}\) (assume \(n\) is a power of 3). 

So we can get a sum-free subset of \([n]\) of size about \(\log_3 n\). (It is sum-free since the sum of two powers of 3 is never a power of 3: look at base-3 representations.) Can we do better?

YES: 

Erdos showed that every set of \(n\) reals has a sum-free subset of size \(n/3\). I have a slightly weaker version of that on slides here.
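Here is a hedged Python sketch of the idea behind that proof (the version on my slides may differ): multiply everything by a random \(\theta\), reduce mod 1, and keep the elements landing in the middle third \((1/3, 2/3)\), which is a sum-free region mod 1. Each element lands there with probability 1/3, so some \(\theta\) keeps at least \(n/3\) of the set; random sampling just very likely finds a good one.

```python
import random

def sum_free_subset(xs, trials=1000):
    """Keep the x with frac(theta*x) in (1/3, 2/3); that region is sum-free
    mod 1, so the returned subset is sum-free. Expected size is len(xs)/3."""
    best = []
    for _ in range(trials):
        theta = random.random()
        cand = [x for x in xs if 1/3 < (theta * x) % 1 < 2/3]
        if len(cand) > len(best):
            best = cand
    return best

s = sum_free_subset(list(range(1, 31)))
assert all(x + y != z for x in s for y in s for z in s)
print(len(s), sorted(s))  # very likely at least 10 = 30/3 elements
```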

That result led to many questions:

a) Find \(f(n)\) that goes to infinity  such that every set of \(n\) reals has a sum-free subset of size \( n/3 + f(n)\).  

b) Replace reals with other sets closed under addition: naturals, integers, some groups of interest.

c) Just care about \([n]\).

Tao and Vu had a survey in 2016 of sum-free results, see here.

We mention three results from their survey:

(0) Eberhard, Green, and Manners showed that the 1/3 cannot be improved (see the survey for a more formal statement of this). So, for example, nobody will ever be able to replace 1/3 with 1/2.9.

(1) Alon and Kleitman  modified the prob argument of Erdos (in the slides) and were able to replace \( n/3\) with   \( (n+1)/3\).

(2) Bourgain showed, with a sophisticated argument, that \(n/3\) could be replaced by \((n+2)/3\).

I believe that result (2) was the best until a recent paper of Ben Bedert (see here). Did he manage to push it to \((n+100)/3\)? No. He did much better:

For every set of \(n\) reals there is a sum-free subset of size \(n/3 + c\log\log n\).

Reflections

1) This is a real improvement!

2) Will it stay at \(n/3 + c\log\log n\) or will there NOW be an avalanche of results?

3) Contrast: 

Erdos's proof I can (and have) taught to my High School Ramsey Gang (note: you DO NOT want to mess with them).

The proof by Ben Bedert is 36 pages of dense math.

4) This leads to the obvious open question: Is there an elementary proof of some bound of the form \(n/3 + f(n)\) where \(f(n)\) goes to infinity?