Pranav

In a long line of AI-things to be spooked by, ‘Super-agent AIs’ seem the most spooky. What are they? Simply put, they’re AI with autonomy. Instead of telling your AI system every single thing to do, prompt-by-prompt, you can give your AI a broad, general task — with sub-tasks and decisions and calls to make. If things go well, AI systems will soon do all of that for you. If things go badly, they’ll do all of that instead of you.

Sam Altman’s debuting one such system, which he’s talking senators through right now — probably because this is the kind of thing that, if managed poorly, could end up in something like the sewing machine riots.

Yeah, we once shat ourselves because sewing machines could take our jobs. And now, we have robots that are probably smarter than those metallic morons from Star Wars.

[Image: droids from Star Wars]

What shitty data were these idiots trained on?

Will this work? No idea! If it does, though, people think this will change the world of work forever. Meta, for instance, believes AI can replace its mid-level engineers by 2025.

Emphasis on mid-level. Not entry-level. We’re talking about experienced people. People who have worked for a few years, who guide juniors and review their work, who are trusted to execute complete tasks. That too at Meta — a place known to hire smart folk to work on cutting edge tech. All those people might be gone in one single year. Are you shitting yourself yet?

That said, it took me a while to understand what was actually new, here:

  • A lot of people were freaked out about these models having PhD-level intelligence. That isn’t something I really care about, at all. 99% of the work humans do doesn’t require a PhD. All the AI models we get to use are, in comparison to the cutting-edge stuff OpenAI’s hiding, basically village hicks. Even so, they’re still far smarter than any human being alive. Surely, at some point, intelligence has diminishing returns?
  • These perform tasks by themselves. But haven’t computers been performing tasks by themselves for a while? Doesn’t the YouTube algorithm autonomously perform tasks without human supervision? Aren’t Swiggy delivery riders or Amazon warehouse workers basically working for an algorithm?

Why are we treating this as a paradigm shift, then?

The answer, I think, lies in how both these things are happening simultaneously. For all of LLMs’ intelligence, their results are filtered through human beings. Algorithms do work autonomously, but they’re basically as smart as a highly trained mouse. The magic, here, comes from something as intelligent as an LLM being given autonomy (there’s a rough sketch of what that looks like right after this list). This creates a difference in kind:

  • Today’s autonomous algorithms have an incredibly small scope for their autonomy. The YouTube algorithm is literally incapable of doing anything apart from suggesting a video, even when what you want is trivially easy. Ask it what 1+1 adds up to, and it’ll still have to dig out a video for you. An LLM, on the other hand, can do a very wide array of tasks.
  • More than just doing more tasks, though, these algorithms can do complex combinations of tasks.
  • See, most jobs I’ve seen are a really complicated collection of very simple tasks. A corporate lawyer, for instance, reads statutes and documents, finds gaps, analyses them, creates suggestions and arguments, writes out text, and talks clients through all of this. Individually, each of these things is incredibly easy to do. From experience, I don’t think there’s any one thing a corporate lawyer does that I was incapable of doing in the fifth grade. But it’s still really hard to be a corporate lawyer, because you need to string all of these tasks together strategically. You need to decide what to do, how to approach it, and how it contributes to your larger goals.
  • Combining varied tasks was, until now, a uniquely human thing. There have been many times in history where we created a tool that was better than us at a task. Mechanical looms were better than human weavers. ATMs were better than bank employees. MS Excel was better than the army of clerks that maintained record books. But while some people lost jobs as a result of these changes, humanity was fine, by and large. We still chose when, how and why these tools were used — we retained the meta-function of managing and choosing between tools — and so, they “saved us labour”. We were still important as organisers of tools. But if AI takes that away, it’s hard to see what keeps us important.
  • More generally, in the first couple of years of LLM proliferation, we gave machines something analogous to an ‘understanding’ of the world. It worked far better than we could have hoped. Machines really are capable of ‘understanding,’ it turns out — they simply replace the electrical connections in our brains with mathematical relationships. Now, we’re giving them something analogous to ‘free will’. If this, too, goes better than anyone hoped, welp, we’re in for a wild ride.
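
To make that concrete, here's a very rough sketch of what "an LLM with autonomy" might look like in code. This isn't any vendor's actual system: `call_llm`, the toy `TOOLS` dictionary and `run_agent` are made-up names for illustration, and a real agent would need memory, error handling and guardrails. The point is just the shape, a loop in which the model, not a human, decides what to do next.

```python
# A minimal, illustrative agent loop. `call_llm` is a stand-in for a real
# model call; here it returns a canned plan so the sketch runs end to end.
import json

def call_llm(prompt: str) -> str:
    """Pretend LLM: decides the next action based on how much has been done."""
    steps_taken = prompt.count("->")  # count of actions already executed
    plan = [
        {"tool": "search", "input": "quarterly sales numbers", "done": False},
        {"tool": "write_draft", "input": "summary of the numbers", "done": False},
        {"tool": "send_email", "input": "draft summary, to the team", "done": False},
        {"done": True},
    ]
    return json.dumps(plan[min(steps_taken, len(plan) - 1)])

# Toy "tools" the agent can choose between. The breadth of this list is what
# separates an agent from a single-purpose algorithm like a recommender.
TOOLS = {
    "search": lambda query: f"(search results for: {query})",
    "write_draft": lambda notes: f"(a draft based on: {notes})",
    "send_email": lambda text: f"(email sent: {text})",
}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Give the model a broad goal and let it string tasks together itself."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # The model sees everything done so far and picks the next action.
        decision = json.loads(call_llm("\n".join(history)))
        if decision.get("done"):
            break
        result = TOOLS[decision["tool"]](decision["input"])
        history.append(f'{decision["tool"]} -> {result}')
    return history

if __name__ == "__main__":
    for line in run_agent("Summarise this quarter's sales and email the team"):
        print(line)
```

The whole difference from YouTube-style autonomy lives in that loop: the scope of what can be chosen is as wide as the tool list, and the sequencing, the "stringing tasks together" that used to be the human's job, is left to the model.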

Oh man. I think ‘rioter’ sounds like a good job change. Very future-proof.

Bonus: if I pronounce it right, people might hear ‘writer’ when I say ‘rioter’, and maybe they’ll respect my jobless ass.

Another stray thought I have: because we live in this moment, we think of the last fifty-or-so years as a set of discrete technological eras. There was the era of personal computing, the dot-com era, the smartphone era, and now, maybe we’re in an ‘AI era’. In the future, though, I think everything from the late ’70s onwards will be seen as a single, multi-decadal period of transformation. Much like we think of a single “industrial revolution” instead of a thermal revolution, a steel revolution, a textile revolution, and so on. Maybe our entire lives will be spent in a single slow, long-lasting period of flux — the “digital revolution”.


Bhuvan

The Trump pump-and-dump saga has messed up my brain. Consider this for a moment: a former president of the United States is essentially rugging his supporters and fans -- it's ridiculous. The fact that more people aren't outraged is both astounding and a signal of underlying societal malaise. The entire episode has left me deeply unsettled, and the sheer grotesqueness and brazen shamelessness of the behavior are causing neurons in my brain to misfire, rendering me unable to process what just happened.

In my view, a major cause of this malaise is the steady erosion of trust, from the interpersonal to the institutional level. The data supporting this assertion is most readily available for the advanced world [1, 2, 3, 4, 5], but I'd argue that the trend has hit developing economies as well. In today's online world, there's no such thing as "local vibes." There's no friction to stop vibes from hopping borders - vibes are now truly international.

Several questions have been bothering me about the second and third-order effects of Trump's brazen grift:

  1. Considering he's inspired an entire generation of copycat politicians, what inspiration will they draw from his latest actions, and what new, lower bar will they set?
  2. What does this mean for trust in capital markets? While crypto may have nothing to do with traditional capital markets, surely the malignant vibes of crypto will rub off elsewhere? We've already firmly begun a journey toward the casinofication of everything.
  3. Though the answer is obvious, what does this mean for trust in financial and political institutions?

I've been searching for useful frames to understand this moment, and I've found two brilliant ones so far:

  1. Adam Butler, the CIO at ReSolve Asset Management, tweeted a game-theoretic perspective on Trump's grotesque shenanigans. It's brilliant. In response to the tweet, a user replied, "ChatGPT just told me in simple words: Adopt a grift or be grifted strategy" - and that about sums up the tweet thread.
  2. The second was an article by Rusty Guinn on the Epsilon Theory blog, who uses the metaphor of the Nazgûl from The Lord of the Rings to explain how symbols and narratives can be used to enslave people.

Anurag

Lately, I've been reading up and thinking about what will happen to retail giants like DMart if and when quick commerce takes over. I've developed some thoughts from a consumer behaviour POV that I wanted to share here:

  • It seems like players like DMart have a much more settled business model. They may be able to profitably sustain the low-cost model they've built. Quick commerce, on the other hand, is very new. While prices are very comparable today, my guess is that quick-commerce prices will slowly have to rise. Yes, there is convenience on offer with quick commerce, and consumer behaviour could, therefore, change at scale, but the picture is still quite muddled.
  • As shoppers, we might be becoming poor planners over time. The convenience of quick commerce means that we no longer need to maintain shopping lists and can order items as and when we remember them.
  • Technology, in general, is moving towards helping us clear the clutter. For example, Apple Intelligence or Google Photos help clear the clutter in one’s photo gallery by auto-tagging and sorting photos. That gives us the flexibility to keep being lazy and cluttered, because technology de-clutters things for us. The same could be true for grocery or everyday shopping in the future: if we get lazy, our shopping lists and shopping habits will get cluttered, and only technology will be able to sort them out. Thus, quick commerce could end up doing well.

But all of it comes down to a battle over which side's logic is better. The more I read about one side, the more convincing it seems. It's crazy, but quite an interesting space to follow right now from a consumer behaviour standpoint.


Tharun

I often hear people around me say, "The rich get richer and stay rich," talking about how wealth gets passed down through generations and creates more inequality. I never thought much about it until I stumbled across this NBER working paper that made me want to dig deeper.

The paper tracked millions of families from 1850 through the Gilded Age up to the 1940s, looking at how wealth moved between generations. I'll be honest - I didn't read the whole thing (who has the time?), so I asked ChatGPT for a summary. Two things I found really interesting:

  1. The top 1% wasn't some exclusive club; there was tons of turnover. More than 70% of the rich actually fell out of the top 1% within just a decade. Being rich didn't mean staying rich. People lost their wealth for all sorts of reasons: living too lavishly, economic crashes, wars, splitting inheritances between kids, or simply having kids who weren't great at managing money.
  2. Even massive wealth didn't last across generations. Here's something shocking - over 90% of grandchildren with a top 1% grandfather couldn't maintain that level of wealth. Even the ultra-wealthy families (top 0.1%) only saw about 13.5% of their grandkids make it to the top 1%. Better odds, sure, but still pretty low.

This got me thinking. Maybe things weren't as rigged as I thought. But what about today? I dug into more recent research (again, with ChatGPT's help) [1, 2, 3] and found something interesting:

Things are really different now. While historically 70% of the wealthy would fall out of the top 1% within a decade, today only 25-30% drop out even over several decades. The rich are much better at staying rich.

That's because today's wealthy have some serious advantages: they make more money from investments than regular income, and they have access to fancy financial tools and tax strategies that help them keep their wealth. While being rich still doesn't guarantee your grandkids will be in the top 1%, their odds are way better than during the Gilded Age.

Looking at all this made me think differently about wealth inequality. It's not just that the rich get richer - it's that our modern system is actually better at helping them stay rich. Kind of explains why we're talking about generational wealth and inequality more than ever.


Krishna

I read about the Stargate Project, and the numbers just blew my mind—$500 billion to build AI data centers in the U.S. That’s not a typo. OpenAI is teaming up with SoftBank and Oracle to kick this off, starting with a massive data center in Texas. They’re putting in $100 billion to start and plan to scale up to 20 data centers across the country by 2029. It’s a huge deal.

The idea is to make the U.S. a leader in AI while creating jobs—hundreds of thousands, apparently. Microsoft and Nvidia are involved too, and they’re even planning to build custom AI chips by 2026. It’s all about having the computing power to keep up with the growing demand for AI.

But it’s not without its challenges. These data centers use a ton of power and water, and people aren’t always thrilled about that. Plus, getting big projects like this approved and built in the U.S. isn’t exactly easy—Sam Altman from OpenAI has even talked about how frustrating the process can be.

Whether AI lives up to all the hype or runs into problems, this project is going to be something to watch.


That's it for today. If you liked this, give us a shout by tagging us on Twitter.