Andrew Trask and Iason Gabriel examine AI governance tensions
Andrew Trask and Iason Gabriel examined tensions in AI governance and scope. Trask argued that focusing on a local subset of problems aligns with the goal of deconcentrating power, since solving problems for all of humanity is difficult without first concentrating influence around yourself. Gabriel observed that AI discourse often treats American conditions as if they were global, a pattern evident in proposals such as universal basic income. The exchange emphasized that AI is a worldwide technology whose effects on most of the world receive limited attention.
@IasonGabriel It's also in line with their goal of deconcentration of power. To solve problems for humanity, it's pretty hard to do so without concentrating power around yourself first.
It also allows for better specialization. They almost certainly know more about America's (many) problems than they do about... say... the UK's problems or Cambodian problems or Indonesian problems. It's humble and likely more impactful for them to build an org that focuses on a subset.
@IasonGabriel Anyway, yeah AI's benefits should flow to everyone, but they should be encouraged to pursue their mission
@IasonGabriel I might be misunderstanding... but it sounds like... we agree?
Like... are you saying it would be "dangerous and common lacuna in AI discourse" for experts on America's problems to divert their expertise to push solutions in a non-American context?
@iamtrask It’s a dangerous and common lacuna in AI discourse, to think that America is the same as the world. It comes up a lot in the context of *universal* basic income. AI is a global technology yet its impact on most of the world is severely neglected.
@IasonGabriel "AI is a global technology yet its impact on most of the world is severely neglected."
Agree with this as well ofc... and could expand it to also include most technologies.
Cool. I'm happy to agree with you on both of those points. And tbh, I bet the people (as individuals) do think in global terms.
But they are "acting locally" with this post. It's a DC think tank:
- they decided to help a local group of people
- they raised money to help that local group
- now they're announcing that their org will (in fact) focus on helping that local group
Like... it's a think tank whose purpose is to influence the capital of a specific country... if they took this money and instead announced they were going to help all countries, that wouldn't be acting locally.
@iamtrask I don’t think so… I’m saying that the most vocal AI experts, industry folk, and politically engaged NGOs vastly and disproportionately focus on US impacts, as if this was *the* question. They should be thinking in more global terms – even if they chose to act “locally”.