In the New World, computers are task-centric. We are reading email, browsing the web, playing a game, but not all at once. Applications are sandboxed, then moats dug around the sandboxes, and then barbed wire placed around the moats. As a direct result, New World computers do not need virus scanners, their batteries last longer, and they rarely crash, but their users have lost a degree of freedom. New World computers have unprecedented ease of use, and benefit from decades of research into human-computer interaction. They are immediately understandable, fast, stable, and laser-focused on the 80% of the famous 80/20 rule.
It is an interesting post, but I don't believe that the lack of multi-tasking and other freedoms is a necessary part of "new world computing". If you can make a slick UI for switching between tasks, then you can also make a slick UI for switching between tasks that keep running in the background.
These limitations are just engineering trade-offs that had to be made to give us an early peek at ubiquitous computing (which, by the way, is the established term for "new world computing", and was a research topic long before I went to college). I seriously doubt the next generation of these devices will have the same limitations.
You can wave your hands about how it's all task-oriented now all you want, but in the end multi-tasking is a necessity, even if only for running a chat client 24/7. And that's exactly what I do on my old and clunky N95 smartphone.