Bill Gurley examines risks of U.S. curbs on Chinese AI
Bill Gurley published an essay under the P3 Institute brand that examines open-source AI adoption strategies. It states that U.S. restrictions on Chinese open-weight models risk allowing those systems to become the global default by 2030 across Europe, Asia, Africa, and Latin America. AI researcher Delip Rao addressed related statements from Anthropic, noting that distillation is performed by every major laboratory including U.S. labs.
2) WTF is this. Chinese labs have been the #1 source of algorithmic multipliers (largely because of innovating under constraints), and yet Anthropic has the audacity to reduce this success to distillation "attacks" and supply-chain reinforcement. And imagine erasing the brilliant work of hardworking Chinese researchers by referring to it solely as "the CCP's AI efforts".
3) I have put quotes around "attacks" because the word choice here is interesting. Every lab, including US labs, performs distillation. Yet it is an "attack" if it comes from places we don't like.
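Since the thread leans on the fact that every lab performs distillation, here is a minimal sketch of what logit distillation looks like; the function names and toy logits are illustrative assumptions, not any lab's actual pipeline.

```python
# Minimal sketch of logit distillation: a student is trained to match a
# teacher's temperature-softened output distribution. Toy data only.
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits track the teacher's incurs a lower loss
# than one that disagrees.
teacher = [4.0, 1.0, 0.5]
close_student = [3.5, 1.2, 0.4]
far_student = [0.2, 3.0, 2.5]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice the student minimizes this loss (often mixed with a hard-label term) by gradient descent; the point is simply that it is a standard training technique, not an exploit.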
4) What Anthropic is truly afraid of is open-source and open-weight models eating their lunch. So the correct way to read the sentence is with this edit, which discards the CCP boogeyman.

What Anthropic is truly afraid of is this chart Jensen shared at CES -- the relentless march of open models.

5) I agree AI systems can automate oppression, but the AI capabilities needed for such applications are hardly frontier. A well-motivated actor, in China or elsewhere, can carry that out effectively with existing models, many as old as 2018. State-sponsored oppression and surveillance is an engineering problem, not a frontier-intelligence problem, and the CCP is excellent at engineering. So this point is moot.

Aside: To make sense of my "CCP is excellent at engineering" comment, I recommend to readers @danwwang's fantastic book "Breakneck: China's Quest to Engineer the Future", where he demonstrates how China is an "engineering state," contrasting it with the "lawyerly society" of the United States. https://www.goodreads.com/book/show/223736214-breakneck

6) Relative exponential capabilities can be achieved with long-context modeling and harness engineering, both of which are engineering problems. This matters because it is the gateway to "exponentially more capable" models, which are trained on trajectories produced by such engineered systems. Chinese AI researchers know this full well and are on that path, but Anthropic cannot spin a scary narrative out of boring engineering routes to intelligence bootstrapping -- models are abstract, but engineering is concrete.
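The harness-engineering loop point 6 gestures at can be sketched minimally: run episodes, record trajectories, and keep the successful ones as candidate training data. Everything here (the toy number-line task, `toy_policy`, `Trajectory`) is an illustrative assumption, not any lab's actual system.

```python
# Hypothetical agent-harness sketch: episodes produce trajectories, and
# only successful trajectories are kept as fine-tuning examples.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    steps: list = field(default_factory=list)  # (observation, action) pairs
    success: bool = False

def toy_policy(observation):
    # Stand-in for a model call: step toward the target number.
    current, target = observation
    return 1 if current < target else -1

def run_episode(target, max_steps=20):
    traj = Trajectory()
    current = 0
    for _ in range(max_steps):
        obs = (current, target)
        action = toy_policy(obs)
        traj.steps.append((obs, action))
        current += action
        if current == target:
            traj.success = True
            break
    return traj

def collect_training_data(targets):
    # Filter: only successful trajectories become training examples.
    return [t for t in (run_episode(x) for x in targets) if t.success]

data = collect_training_data([3, 7, -2, 30])
assert len(data) == 3  # target 30 is unreachable in 20 steps and is filtered out
```

The bootstrapping claim is just this loop at scale: a better harness yields more successful trajectories, which train a better model, which makes the harness more capable.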

7) This blog post makes contradictory points depending on whatever it is arguing. On the one hand, it repeatedly portrays Chinese AI efforts as a centralized, CCP-led effort (see earlier screenshots) to create fear; but in this highlighted section it switches from CCP-led to "PRC labs" and claims they are underfunded. If it were CCP-led, given the strategic value, wouldn't the CCP/PLA have all the money in the world to fund whatever these labs need?

8) It is a race and there is a finish line, except the finish line is like hitting escape velocity, and the race is about who hits it first. Except that Anthropic wants itself, and only itself, to hit it. Open-weight models from the Chinese labs are nipping at their heels while Anthropic gears up to IPO, and they obviously do not want that.

9) Anthropic is deluded to think there will be a "global" AI stack. That's like positioning for a "global drinking-water supply". No country, authoritarian or democratic, in its right mind will sign up for that and give up its sovereignty. Even many individuals and businesses in free societies, let alone countries, shudder at that possibility, which is driving global adoption of open models. Not owning your intelligence is a liability.

10) Any writing that wants to keep the "Global South" as the "Global South" -- exploited by the "Global North" and subdued -- reveals its true nature. Why would these countries support a hegemonic "global AI stack"? Why would they not ally with forces that uplift them?


A new @bgurley blog post! I have been thinking about how sophisticated executives are using open source in super creative ways. Started writing this three years ago. Excited to finish it up and publish it! And with the new @p3institute brand. https://substack.com/home/post/p-197032865?source=queue
On one hand, it's really bad that the American empire is in a death spiral.
On the other hand, this is really good training data.
Well, to be honest, this is not happening yet in Europe: a general skill issue extends to inference, and lots of infra is anchored on Azure/AWS, so teams just use whatever model is served there.
From the article:
China is building pragmatic open-source AI models and no one can really stop them....
These models can already handle 50% of everyday tasks effectively and are 30x cheaper to run.
In a few months, they will be able to handle most professional tasks -- you don't need a 10T-param model to automate work.
The US should stop complaining and catch up.
The US just cleared NVIDIA to sell H200 chips to China. Exactly the opposite of what Anthropic said to do.
Anthropic: Chinese AI is a threat. They've correctly identified the risks, including cheap Chinese AI capturing American businesses even when it's less capable. But they completely blundered the solution: zero mention of an American open-source strategy. In fact, they actively campaign AGAINST open source. 🤦‍♂️ Full breakdown of their paper from today: