


Most AI policy conversations still orbit around Washington and Brussels, but Asia-Pacific is already writing a very different rulebook. In this episode, I talk with George Chen, Digital Partner at The Asia Group and former Meta policy executive, about how AI is actually being governed, built, and deployed across APAC, China, and the global south.
George traces his own path from journalism to big tech to advisory work, and uses that vantage point to explain why APAC is not “one market”—and why the EU analogy breaks down almost immediately. Countries like Japan, Korea, Singapore, and China are leaning into AI as a tool for economic recovery and industrial upgrading, often taking a much more pro-innovation, pro-growth stance than the EU’s more precautionary approach. At the same time, Southeast Asia is becoming the physical backbone of the AI build-out: Singapore as HQ and regulatory hub, with Malaysia, Indonesia, Thailand, and the Philippines hosting the data centers, power, and connectivity—along with all the local tensions that come with that.
We also get into what “responsible AI” actually looks like inside a company. Beyond the buzzwords, George breaks it down to three pillars—security, safety, and privacy—and talks through how mature players like Microsoft or Meta build these into product design from day one, versus the reality for startups trying to ship fast with one lawyer and a single policy person supporting multiple markets. He also makes the case that fragmented regulation and the lack of international standards are becoming a real tax on innovation, especially outside the US and EU.
Another big thread is the emerging US–China competition over AI governance itself. It’s no longer just about who has the best models or chips; it’s also about who exports their rules, norms, and defaults to the rest of the world. The US is pushing an “America-first” innovation and safety model to allies, while China is pitching AI as a kind of public good to the global south—combined with a more cost-efficient, top-down deployment model and very strict cyber and real-name rules at home. George argues this divergence is already shaping how content, deepfakes, and AI-generated media are treated in different jurisdictions.
We talk about the local edge of Chinese models—why in places like Beijing, models such as DeepSeek can be more useful than ChatGPT or Gemini for everyday queries because they’re trained on more localized, timely data. From there, we zoom out into the new AI talent map: countries like Indonesia, Vietnam, Kazakhstan, and Uzbekistan trying to position themselves as low-cost AI talent hubs and “back offices” for global AI companies as coding gives way to prompting and applied ML.
We close on a more philosophical note: should AI be built as a subordinate assistant or a true partner? George shares his uncertainty here, and we talk about what happens when we give AI more agency, emotional intelligence, and continuous workloads. At some point, the conversation shifts from safety checklists to ethics, culture, and even “digital colonialism”: whose values, whose norms, and whose worldview are encoded into the systems that end up mediating how we see the world.
In today’s world, there’s no shortage of information. Knowledge is abundant, perspectives are everywhere. But true insight doesn’t come from access alone—it comes from differentiated understanding. It’s the ability to piece together scattered signals, cut through the noise and clutter, and form a clear, original perspective on a situation, a trend, a business, or a person. That’s what makes understanding powerful.
Every episode, I bring in a guest with a unique point of view on a critical matter, phenomenon, or business trend—someone who can help us see things differently.
For more information on the podcast series, see here.
AI-generated transcript.
Grace Shao (00:00)
Hey George, thank you so much for joining us today. I've been really excited and waiting for this chat. You know, you are a very busy man. You're constantly traveling; I can barely reach you in Hong Kong. So I really appreciate your time today, sitting down with me and sharing your insights with my followers and some of our listeners. To start with, you've worn many, many hats: journalist, tech executive, policy advisor, and now a partner at The Asia Group, where you advise a lot of firms; you're probably helping companies on, I believe, geopolitical positioning, right?
George Chen (00:29)
Thank you. First of all, thanks for the invite. It's quite an honor to join a growing cohort of guests on your program. Really happy to have a discussion about tech and policy issues, because I think you're right: I spent my first 10 years in media, similar to your background, and in the most recent decade I've worked very much at the intersection of technology and policy.
My biggest takeaway from my last job at Meta, one of the biggest platform operators in the world, is that we very much focus on technology development, like the breakthroughs, while the resources for policy support are actually quite limited, especially in the Asia-Pacific region compared with the US. I think for all the big tech companies in the US, given the politics domestically, they have to do a lot on the political and policy side. But for Asia-Pacific, the policy work, compared with other investments, like in data centers, technology, hiring of engineers, is still very, very understaffed, under-resourced, and sometimes under-appreciated. This is why we need to address concerns about policy issues as we advance the technology. I always tell my students, my friends, my partners that the key challenge, even if you have ChatGPT 5.0 or 6.0, is how to get governments to understand new technologies and also get users to have more trust in those new technologies. Otherwise nobody will use them, nobody will trust them.
Grace Shao (02:15)
I think that's super helpful. A lot of times when we think about policy or safety issues, we think about them as a siloed part of the ecosystem. But really, exactly to your point, we need the developers to understand the concerns of the users. We need the users to understand the safety risks of the products. We need the regulators to understand what it means to implement these technologies throughout our economy, right? So it's actually all interrelated.
I think today, to start off with, let's go into big tech, given your background with Meta and working with a lot of these big tech companies. For the listeners, you're based in Hong Kong but have worked predominantly for American big tech companies. What is, I guess, the fundamental feeling right now as we see the evolution from a social media company to an AI-focused company, as AI is now at the forefront of their strategy?
George Chen (03:11)
Right, so the Asia-Pacific region is big. I always try to explain to my clients and friends: when people talk about Asia-Pacific, the first rough perception, perhaps from a Western perspective, is, okay, treat Asia-Pacific like the EU, right? But the EU is a single market. They very much share a working language, English, also one currency, and they have the European Parliament to pass legislation for EU member countries. Asia-Pacific is far more diverse, far more varied, and much bigger. So it's hard to just copy whatever works in the EU and then do it in APAC as well. Using AI regulation as a clear and classic example: the EU is the first government to pass the world's first AI Act, right? But the so-called Brussels effect didn't really happen this time in Asia-Pacific countries. You didn't see all the countries, like Singapore or Japan, quickly follow up with a similar risk-based or penalty-focused approach to AI. Instead, if you look at Japan, they are very much welcoming. Japan declared they want to be the most friendly, open country for AI development. The first data exception for AI testing was actually in Japan. Then Singapore followed, and Hong Kong is also considering it, right? So APAC took a very different regulatory approach to AI versus the EU. I think this is something all the American tech companies have to realize. It's not like America leads technology and then only the EU matters because of the special relationship between the US and EU. As I mentioned at the beginning, the resources for public policy work are very limited in APAC, while the EU still enjoys a lot of resources as an English-speaking market with a lot of political connections. When it comes to policy enforcement and policy support, Asia-Pacific as a whole feels more like a third-tier priority.
So there's still a lot of an educational process, a learning curve, for big tech, largely from the US, to understand the challenges and the opportunities in the Asia-Pacific market. However, I also need to highlight that for many big platforms, Asia-Pacific is actually not just the largest market by internet users for American tech companies, for almost all of them, in terms of user base; it is also a very important source of revenue for those companies. So now you see the imbalance, right? You make a lot of money from Asia-Pacific, but the support you give to Asia-Pacific is quite limited compared to the US and EU. So the learning curve is there.
American tech companies want to have a more sustainable development and want to have a more constructive relationship, sort of a more constructive partnership with Asian governments. I think there’s still a lot of work to do.
Grace Shao (06:31)
I think that's really helpful for listeners, because sometimes people also approach me and ask, what's APAC? And I'm like, APAC is a gazillion different markets and it's actually so fragmented, right? And I think people sometimes misunderstand it, kind of similar to what you said. They think it's like the EU. It's not: there's actually no consistency in currency, no consistency in language, no consistency even in income or anything. So it's quite scattered. In that sense, I actually want to ask you about something you mentioned just now.
George Chen (06:39)
That’s right.
Grace Shao (06:58)
Japan and Korea this time are taking a more proactive approach; the countries themselves are really embracing AI, compared to the EU's more wait-and-see, more protective measures, right? Which is not what they usually would do. What do you think the trade-offs are in that sense? Do you actually think that means we are seeing more innovation, more technological breakthroughs, or even economic diffusion of the technology right now in Japan and Korea?
George Chen (07:30)
Yeah, let me put it this way. AI technology, we believe, is still at a very early stage, right? Even with everything happening now, if you put it in the overall development of AGI, we are still very much at the early part of the curve. So for the Asia-Pacific region, yes, it's diverse, but we can still see some patterns, some similarities, across different AI strategies. At The Asia Group, my firm, we did a research paper on the different regulatory approaches to AI governance in the vast Asia-Pacific region, from Australia to even Mongolia. Long story short, you are right. Some countries in Asia-Pacific take a more economic-benefit-focused approach, a more innovation-focused approach. Countries like Japan, Korea, and Singapore want to see how AI can help them drive economic impact, right? It doesn't mean they don't care about the safety and security issues, but they want to have a certain flexibility to encourage more startups to succeed, right?
And to a certain degree, maybe surprising to many, China too, because China is very well known as one of the strictest internet markets in the world. Basically very few American tech companies can really succeed in China. The only two exceptions in my mind are Tesla and Apple, but they are more consumer-related. If you touch on content, if we talk about Google and Meta, that's a completely different story. But even so, China this time is also taking a more pro-innovation, pro-economy approach to AI development. It's a very top-down approach: President Xi saw the success of DeepSeek and he basically wanted more success stories like DeepSeek. Japan and Korea are in more or less the same category, pro-innovation, pro-economic recovery. For Japan,
I talked with my friends and colleagues in Japan. The sentiment in Japan is, we've lost 30 years, I guess, three decades, in terms of economic recovery. This is like our last chance. And Japan has been quite strong in robotics and those fundamental technology developments. So that's the sentiment in Japan: we have to grab the AI opportunity. In the EU, I have to say, part of the reason the EU has been so keen to develop regulations and legislation in the recent five to 10 years, in my view, and some may argue and disagree, is that it does come with a sense of protectionism, right? Because if you look at all the market leaders, you name it, OpenAI, Google, Microsoft, AWS, all of them are big tech from America, right?
I remember there was a chart listing the top 10 most advanced AI models. There was only one model from the EU, actually from France. The rest were from the US and China. So that tells you a lot. If you are an EU regulator looking at it from a competition perspective, you will more or less have a sense of anxiety. And then you will look at all those big techs and say, no, we need to do something, in the name of safety and security. I'm not blaming EU regulators for doing it. But in the meantime, we also hear more and more concerns, even from heads of state, like French President Macron. He's concerned that tough AI regulation in the EU will harm innovation in the EU rather than help European startups.
Grace Shao (11:14)
I think we can double-click on China later; it's going to have its own special segment for sure. China is just such a big story. But for some context for a lot of listeners: Meta and Google, companies like these, actually do exist in mainland China, but they mostly only have their ad services there. So basically they help enterprises with their ad sales to the West. But to George's point, they're not really operating at the full capacity that you would see elsewhere in the world.
George Chen (11:33)
That’s right.
Grace Shao (11:39)
Now I do want to finish up on the APAC narrative, and the APAC focus right now, which is ASEAN. Let's set apart South Korea, Japan, and China; the Northeast Asian countries are frankly much more economically developed as well as more economically focused, right? For ASEAN right now, especially since I just went to Singapore last week, it's really interesting. We basically have the players, like you said, OpenAI, Google, Meta, all of these, with APAC headquarters based in Singapore, even the Tencents and the ByteDances of the world, right? However, Singapore is tiny, just in terms of size and resources. So what we're seeing is they're moving essentially all the infrastructure, the compute, energy, data centers, connectivity, anything you can think of, to Malaysia, Indonesia, Thailand; they're building it out over there. How do we actually understand this right now? Is this a net benefit for these economies? Or is it actually really hurting the local economies and, in some ways, exploiting them and really just serving the companies based out of Singapore? How do we understand that?
George Chen (12:45)
That's right. So let's talk about Southeast Asia. It's complicated. When we're talking about APAC, actually the most complicated part, I think, is Southeast Asia. Because when we talk about Korea, Japan, China (even though China is a socialist country, in terms of economic models there are a lot of elements related to capitalism), those are the most advanced economies in Northeast Asia. Southeast Asia is very diverse; the countries are very different from each other.
Singapore is the exception, the most advanced economy in Southeast Asia. But in terms of population, the user base is pretty small, like 4 or 5 million people, even smaller than Hong Kong. You're right, a lot of the tech companies, even before AI became a trend, companies like Meta, Google, and Apple, all had their headquarters in Singapore. It has really become the hub for big tech over the past decade or two. For Hong Kong, unfortunately, thank God we still have big banks like JP Morgan and Goldman Sachs, so we remain a financial center. But in terms of tech innovation, you have to give some respect to Singapore. They did very well to attract those tech headquarters. So this also became, you are right, sort of a point of...
I don't know how to describe it. Some of the neighboring countries are certainly jealous of Singapore's success, right? And countries like Indonesia or Malaysia are also wondering how to benefit from the fact that all the big tech companies have their regional headquarters in Singapore. If big tech only cares about the relationship with the Singapore government, because the headquarters are there, the neighboring countries will not get any benefits. But Malaysia actually found its own way in the regional AI race, and its offer is data centers, because of stable supplies of electricity and relatively much cheaper labor, land, and overall costs for data center operations. This is why Malaysia got a lot of attention from big tech too: AWS, Microsoft, they all made huge investments in Malaysia. Not AI R&D yet, maybe, but data centers first. In the AI industry we have a popular saying that AI is like electricity; Sam Altman said that. Basically, it's like a new kind of utility for everyone's life, right? But to develop AI you also need electricity; you need a lot of investment in infrastructure. This is why Malaysia already stands out, and the Philippines too in a way, as the cheap, reliable alternative for data center investments in addition to Singapore. Everybody complains about Singapore in terms of living costs, even how difficult it is to get a work permit in Singapore these days. Even if you have a qualifying job, it doesn't mean you will get a work permit immediately. Actually, in comparison, Hong Kong is doing quite well at attracting talent more easily these days in the tech and financial space. Back to the AI governance issue: yes, Southeast Asia also took a very different approach in comparison with Korea and Japan. I think Singapore is an exception.
Otherwise, if you look at countries like Indonesia, or countries like Vietnam, they still take a much more security-focused approach, especially Vietnam, given their political system, right? Meta used to have, and I think still has, a lot of problems in Vietnam. One of the key issues is content moderation; there are a lot of human rights and similar struggles, Thailand too. So for those countries, I feel like the old problems from the social media era were never really solved.
And those problems will be carried into the AGI era. When those governments look at AI, their first question is, okay, how can I prevent people from using AI to cause any unnecessary trouble, meaning social instability, right? So it will be the same old problems facing big tech companies. And that tells you a lot: when those countries look at AI, they still come from very much a security-focused mindset.
Grace Shao (17:09)
That's really fascinating, because on the infrastructure build-out, on my end I've done some research and writing on the Johor build-out and the overcapacity in data centers right now. And the unfortunate consequence is that the local infrastructure is not able to support the rampant build-out; it's actually affecting people's livelihoods, right? But your point is really interesting. I didn't really think about it that way, from the perspective of how these governments view big tech:
it's to prevent bad actors from using their technology to propel, even further, intentional, harmful content, right? And then, like you said, cause social unrest that would be very troublesome for the local government. So that's the policy perspective carried over from the social media era. But what would be something different? What would be something that big tech will have to start thinking about that they didn't even have to worry about before?
George Chen (18:03)
Right. Well, as I said, a lot of the old problems from the social media era will remain in the AI era, such as misinformation and sensitive political speech, especially for countries like Vietnam and Thailand. When I worked at Meta, those Southeast Asian countries were always considered high-risk countries when it comes to content policy risks, right?
On the other side, Vietnam, Thailand, and Indonesia are much bigger. They are also smart; they also see AI as an opportunity. So they are thinking: how can I use AI to train the next generation of digital talent? Those countries also have relatively younger demographics, so there are a lot of smart kids who can get on AI and learn. I think that also poses a partnership opportunity for those American tech companies: can we do some training programs to grow the next generation of AI talent in those countries? I think those governments would very much welcome those initiatives. And this is not just happening in Southeast Asia. You may know I also have some career experience in Central Asia, and I can tell you even countries like Kazakhstan and Uzbekistan are trying to focus on talent development. Because, if you think about learning how to code 10 years ago, it was actually quite an expensive exercise. You needed professional tutors. You needed long hours to learn one language. When I grew up, I don't know if you know, we started with Microsoft, the DOS system, which no one talks about now.
So it took like a year just to get a basic sense of those languages, right? But with AI, you don't need to learn the code. It's more important for you to understand how to write a proper prompt. So countries like Uzbekistan and Kazakhstan are also catching up, trying to be the back office for big tech, to train and grow junior engineers. Hopefully they can get some basic work done in those countries for labor-cost reasons, rather than big tech having to hire all those engineers in Silicon Valley. And I think that poses the same sort of opportunity for Indonesia, Vietnam, Thailand, and other Asian countries.
Grace Shao (20:30)
That's really interesting. So there's a reshuffling of talent, and the talent strategy is actually changing from the social media era, or just the big tech era. I want to look at responsible AI now. We hear the phrase a lot, right? Responsible AI, AI safety.
From your experience, what does responsible AI actually look like inside a company, and what changes in org charts, KPIs, or decision-making when we're talking about responsible AI? What are the metrics we must track?
George Chen (20:59)
That's right. Okay, so you mentioned that I wear a lot of hats. I don't want to speak like a professor, but I do teach a course at the University of Hong Kong and at Tsinghua University. My course is about digital society and governance, and one of the lectures is actually about AI governance for corporates. Responsible AI is a term that's very popular, not just in the tech industry; you now hear it more and more in business in general, right?
In my view, responsible AI is something like the privacy statement. When you go to a website now, the privacy statement has already become a very normal thing, right? When you use a service, they have to get the user's consent first, and they need to tell you what kind of data they're collecting and for what purpose. That's the privacy statement; on every website, you will find one. Responsible AI is similar.
So governments are doing their job from a regulatory perspective, and from a self-regulatory perspective they work with NGOs and associations to establish industry codes. But for corporates, responsible AI is more like a set of business-led principles. I want to use Microsoft as a perfect example. I think Microsoft is leading the way on how business can take a more responsible, sustainable approach to AI. Microsoft calls it trustworthy AI, but that's just a name change; it's more or less the same. Microsoft very much focuses on three pillars, and I believe many other AI companies focus on more or less the same. First is security: you have to have a very secure AI system. That's the basis; that's also where user trust comes from. Second is safety: online safety, particularly for more vulnerable groups like children and women, and how to address those issues. Again, the same social media problems, like harassment, online safety risks, even suicide prevention, still exist, if not worse. Last but not least is privacy. That's easy to understand. So security, safety, and privacy: these three pillars are the key foundations for responsible AI, trustworthy AI, or whatever other name you use. When we talk about the process for big tech, or for a traditional business like Starbucks that wants to implement AI, in the social media era we had a well-known principle called privacy by design, which means privacy should be the first thing to consider when you develop a product. This is like a rule, the ABC, the 101 for any product manager, right? When I worked at Meta, the engineers always got the reminder: it's not like you have a great product idea, you talk to everyone, and finally you think, okay, I should talk to my privacy lawyer. You should do it the other way around. The first people you should talk to are the privacy legal team, the privacy team, right?
Responsible AI follows a very similar approach. The first thing, when you develop an upgrade or a new service backed by AI, is to think about whether you can tick the three boxes, security, safety, and privacy, for the AI services and products you're going to launch. Microsoft set a very good example when developing Copilot, their AI platform for users. So I hope that gives you a rough sense of what responsible AI is about.
Grace Shao (24:39)
I think what you mentioned just now that stood out to me is that a lot of these big tech companies, like Microsoft, have very mature legal and safety teams in place, right? So it's much easier for the developers to actually tap into their know-how and their knowledge. And obviously, like you said, it's an extension of not just their content moderation work but also product safety. But for startups, and I don't know if you work with them at all,
just the proliferation of AI tools right now is crazy. Basically, as you also hinted, developing a new product is so much easier than it was, say, 30 years ago. It's not only that coding languages have made it easier; now we have agentic coding tools, right? So you can have vibe coding and whatnot. How do we actually understand product safety and responsible AI when we start talking about new products within these startups? And my question on the broader picture: how do we understand responsible AI in a big market like China, where a lot of products are consumer-facing AI, versus maybe the US, where it's a lot more enterprise-facing? Can you give us some color on that?
George Chen (25:55)
Right. So first of all, startups: yes, we do have some startup clients. I'm very glad that the startup clients we work with in the tech sector are very much backed either by some leading figures in Silicon Valley or by global VCs, so I think they do have stronger internal compliance controls. And over the years, all the big tech companies, from Meta to Microsoft to others, went through the classic incidents, the lessons. Remember when Mark Zuckerberg had to apologize over the Cambridge Analytica incident? It feels not too far away, even for people with short memories. I think those incidents did serve as very good lessons for those, I will say, more US-backed startups. Their goal is clear: if you want to get listed on Nasdaq someday, you'd better do things right from the beginning. But there are some naughty-boy cases from China. You probably noticed some AI app startups from China
that grab content from Disney, Paramount, Sony, to make those funny AI effects. In fact, that's a serious violation of IP-protected content, but those startups are like: I don't care, I just want to have fun, let's see how it goes. And then suddenly they get 1 or 2 million, and then 10 million users within a week. But they're not going to go far, right? There will be a long series of this and that. So that is not the right approach. I do think startups need to be very clear about the boundaries. It's not like, okay, you are a startup, so you can lower your compliance requirements and do whatever you want. At the end of the day, you need to be responsible: not just responsible AI for the users, you also need to be responsible to your investors, right? So that's on the startup part. In terms of compliance, I think startups in general do pay a lot of attention to complying across different markets, especially now with AI. As we mentioned, if you look at APAC, there's no unified approach to AI governance. It's not like the EU, which has the AI Act. So the compliance cost is indeed very high for startups. This is the luxury big tech have. We just discussed Microsoft as a case study: Microsoft has a pretty decent-sized legal team, security team, and enforcement team to support those three pillars, safety, security, and privacy. But for a startup company, you can imagine they probably only have one lawyer and one policy manager for everything.
That is it. That is a challenge. And this is also why a lot of companies complain about the very tight regulatory environment in the EU: as a startup, you don't want to spend all your money, or even half of your money, on compliance costs, right? I always joke with my friends that if you hire more lawyers than engineers at a tech company, I don't think that's right. So this is a constant challenge for startups: how to comply but, in the meantime, keep innovating.
Grace Shao (29:28)
I think that’s really interesting because essentially, like you said, whether it’s a startup or an enterprise, in many ways they’re facing the same issue. But my issue right now with the AI space is that there’s a lack of international standardization, right? So for example, globally, wherever you go, you can’t just go stab someone. The rules around drugs, or even other issues like driving and other safety issues, may vary, but there is a standardized baseline norm — something we as humans believe should not be done, which is essentially: don’t kill people. Homicide is illegal anywhere you go, right?
So now with AI regulation, the issue is that right now we’re not seeing countries come together and say, this is a standardized baseline we all share. Maybe, like you mentioned, child pornography and child safety are very high on the radar, but even that can be quite subjective from culture to culture. So how do we make sense of that when we’re going to have AI proliferation across the economy and different touch points in our daily lives?
George Chen (30:33)
That’s a very important, interesting question and point you make. You reminded me — when I teach my students in the classroom, one of the examples I give them is: I travel a lot, right? So in different countries, the first question I ask myself is, which socket do they use for the plug, right? And then even in the EU — well, the UK is no longer part of the EU.
But even when you cross the border from country to country, that’s why we always bring a travel adapter, right? So when it comes to AI governance, it’s actually the same problem. You’re absolutely right on the very spot: the EU has the EU AI Act. If you are a startup and you want to expand into the EU, no argument, no negotiation, you have to comply with the EU AI Act. Plus several other regulations like the Digital Markets Act, the Digital Services Act, and then GDPR. So to expand into the EU is not easy. The compliance cost will be very high. But it’s the same thing in APAC. You go to different markets: Indonesia is going to have its own AI regulation versus Singapore, versus other markets. Ideally, the UN should take up a bigger, more powerful role to have some control or supervision over how AI should be used. Think about the same question with the telephone. When the telephone was invented, why is Hong Kong’s country code 852, right? Why is China 86, why is the US 1? Because someone made the standard for telephone codes.
That was the ITU, the International Telecommunication Union. So some people say we also need someone like the ITU. Maybe the UN has an AI panel, but I don’t know how powerful the UN AI panel is. I mean, not to mention that the US government is not really a big fan of the UN these days. So I agree, we should have international standards on AI, especially on AI safety as a key part of AI governance. We should have some principles.
So I think this is something all the countries are looking to. We will very soon have the new annual AI summit in India in February 2026. I think India also wants to use the AI summit as an opportunity to discuss those standards issues. And to add a point — I don’t know how many people already realize this — actually the US and China are not just competing
in the aspect of AI technologies, like facial recognition, deepfakes, and other issues, but also competing with each other on AI governance. Basically, the US and China are competing over who’s going to write the rules for AI usage for the next generations. So this is another flashpoint between the US and China when it comes to technology innovation, not to mention the two countries are continuing to compete on AI technology itself: who’s going to have faster models, whether it’s Gemini or DeepSeek winning the battle. So that would also be a story we watch very closely in terms of competition and struggles.
Grace Shao (33:57)
Yeah, I think it’s also because the technology is moving so fast right now. It’s really hard for regulators to keep up even domestically in each country at this point. So yeah, I do agree, I think we need some kind of international standardization. I met with Kuaishou’s representative a week ago and it was very interesting to hear. They’re very focused on the text-to-image and text-to-video space. And basically they said in China,
to your point, cybersecurity laws are among the strictest in the world. Actually, in terms of AI-generated content (AIGC), they’re also among the strictest in the world. She said that if you remove the watermark, that is actually a criminal offense. Yeah, it’s quite interesting. And I mean, on one hand you think it’s very extreme; on the other hand, I think it’s very needed, right? To make sure that deepfakes, malpractice, or fabricated content do not spread
George Chen (34:37)
That’s right, yeah.
Grace Shao (34:52)
and lead to the social unrest you mentioned earlier, or public disturbance, et cetera, or even human harm. Anyway, on that note, I want to talk about China. You’re interviewed a lot by the media on China-US — whether you want to frame it as tensions or competition or the race, whatever we want to frame it as, there are going to be two camps essentially, right? How do we actually understand the two ecosystems at a high level? Where are the real fault lines? Are they chips, cloud, data, or, like you mentioned, regulatory rules? Help us understand the two ecosystems.
George Chen (35:21)
Right. So let’s talk about China. The US and China don’t just compete in technology. The US and China also have more and more clashes on AI governance — on the way AI should be regulated. The US published the AI Action Plan under the Trump administration. The AI Action Plan published by the Trump administration is actually already a shift from the AI policy approach taken by President Biden when he was in office. When Biden was in office, it was more about protection. Biden focused very much on online safety and this and that. They even set up the US AI Safety Institute. When Trump took over, things changed quite a bit. Now Trump is taking an America-first approach — America’s version of AI innovation, right? Which means: how can we keep America competitive in AI technologies? In the meantime, I think the Trump administration also wants to export the US governance model on AI to the rest of the world, to many of its allies, especially in Asia — Japan, Taiwan, Korea. While China is also trying to influence, perhaps mostly, global south countries and Belt and Road countries to be more aligned with China’s AI governance model. So the two countries are not just competing in technology, but also in the way AI should be governed.
Grace Shao (37:00)
I think on that note, if you’re a developing country right now, whether you’re in Asia, Africa, or the Middle East, and you’re listening to pitches from both Washington and Beijing — like you said, essentially they want to capture the rest of the world — what questions should you be asking to avoid being locked into one ecosystem?
George Chen (37:17)
Right. I’m always asked by my friends from global south countries: which side should I take? And my answer is no, you shouldn’t take any side. You should take whatever fits you — a combination of the best you can take from both the US model and the China model. In some ways, China was quite innovative in solving some unique challenges caused by, like, deepfakes and this and that. But in the meantime, people will say, oh wow, but you have to sacrifice a lot of your privacy, right? Even to get on the internet in China, you must be a real person. China has the real-ID policy, right? In Hong Kong it’s now similar: even for a mobile phone, you need to register the number with your real ID. But in the US, this is quite unthinkable. The real-ID approach in Hong Kong and China can certainly raise a lot of people’s privacy concerns. Versus in the US, everybody can join the party, basically. That will also waste a lot of time. I think in China, you will see AI governance very much led by companies — like the traditional BAT, plus DeepSeek and Huawei — through a more company-led plus state-supervised model.
Grace Shao (38:48)
I think that’s really interesting. You just mentioned something that struck a chord with me, because I was just in Singapore and I was reflecting on how I basically hated my experience there six or seven years ago, when I was single without kids, because everything feels — not controlled, but very watched and very sterile, and to your point, everything is very top-down. But this time, going as a mother of very young children,
George Chen (39:04)
Right.
Grace Shao (39:10)
I loved it. I was like, wow, it’s so clean, it’s so safe. I’d rather give them all my data so they can protect me; they know how to track bad actors. And I think to your point, it’s very interesting: the idea or the value of liberty per se may be very different in different cultures, and may also change as you go through different phases in your life. So it will be interesting to see how companies or countries choose which ecosystem to join, right? Based on their own
George Chen (39:17)
Ha ha ha
Grace Shao (39:38)
belief system or value system. I want to ask you, how are the export controls right now actually affecting companies operating between China and the US? Because I know a lot of your clients are probably operating between these two large economies. You sit in Hong Kong, and most of your clients, I would assume, are actually MNCs that use Hong Kong as some kind of gateway to enter and exit mainland China, right?
George Chen (39:48)
Mm-hmm.
That’s right. I think the China model so far is basically more multilateral — multilateralism, right? To work with different countries, a more multi-stakeholder approach. This is also what Premier Li Qiang — when he was in Shanghai, I think in July, for the World AI Conference — he also called AI a public good. I found that concept quite interesting. Basically, he said this is not just something you and I should exclusively hold, right? This is a public good. This is almost for the fate of all mankind in the future. So we need to share the success, share in the growth. Versus the American approach, which is very clear: this is America first, we need to take the lead. And America has always taken the lead in AI and technology innovation. And again, don’t get me wrong. I think both Li Qiang as Premier for China and President Trump have their own very good reasons to manage AI in their own ways. One is to keep raising the American flag high and to make yourself a role model, right? The other is to have a more open model everybody can come and share. I think so far the Chinese model is perhaps more appealing to a lot of developing countries, given that
it’s more cost-efficient, and a top-down approach also boosts efficiency, rather than a more bottom-up, democratic approach where you need to talk to 10 companies to get alignment, this and that.
Grace Shao (41:39)
Yeah, actually, you just touched on something I was going to ask you. China has pushed out the AI Plus initiative, and like you mentioned, Li Qiang and others are embracing this idea of exporting AI to the global south. But beyond the branding, I was going to ask you, do you think it’s actually successful? But it sounds like they are, right? It sounds like the global south is adopting China’s AI ecosystem because it’s more cost-efficient, deployable, and scalable, given that it’s open source, open weight, right? I want to ask you one last question on this section: you comment on China’s AI ecosystem a lot in the media. What is something that we’re missing here? What are people missing, maybe even in mainstream media, that you think is very important for people to know?
George Chen (42:06)
Right. That’s an interesting question. I think the international cooperation part is something I’m quite concerned about. China has a lot of good engineers. Actually, we also saw a lot of engineers coming back to China from the US. However, both sides, the US and China, should talk to each other more to maximize the research capacity for the overall interest of all mankind. So far it’s not happening. And then the result is — I also think the reason why AI is so special is that I believe AI also touches on ideology, the way people think about things. So the US right now is very US-centric, just focused on their AI. And China is very much focused on their own version. So when the two AIs basically develop in parallel, when you don’t talk to each other, that will result in a more divided world when it comes to content moderation, when it comes to understanding certain issues — which policy approach you take to explain a historical event, for example. If the two countries, the US and China, don’t talk to each other, it’s not going to be helpful
for the overall development in R&D. So again, when I teach my course about internet governance, I use the world’s most popular apps as an example. Can you believe — it may not be a surprise for you — actually seven out of the 10 most popular apps are in English, originally from the US, very much from California. The other three apps are either from China or Singapore. It tells you something. I think when social media companies began to expand into the Asian region, a lot of countries were fearful of the impact that American social media could bring to their markets. They also talk about so-called digital colonialism. Which AI you use will influence your thinking.
So I think, in a way, people also need to be mindful of whether you are too much into the US model of AI, and whether that also begins to change the way you think about things. I actually tested the Chinese AI, and I tried some new features of the AI models. Sorry, I’m trying to think about which model I used. But my point is, the Chinese models are very local, very efficient. For example, when I’m in
Beijing, right? I’m not going to use ChatGPT, not because of the VPN, but because I just find that DeepSeek — the database they have — is more practical, efficient, and timely than, say, Gemini or ChatGPT. So if I look for the best noodle restaurant in Beijing, the answer from DeepSeek could actually be much better, more accurate, than Gemini or OpenAI.
Grace Shao (45:17)
That’s really interesting, because it reminded me of one of the Chinese LLM startups. They said that they’re actually working with local governments in the global south, and exactly to your point, they localize information, localize the culture and language. It goes beyond just the surface language, right? I think that’s really interesting. I wanted to ask you one last question, which is...
What is one differentiated view or non-consensus view you hold? This could be about the AI sphere or it could be about something in life.
George Chen (45:56)
I’m still trying to understand how we should position AI. There is a debate in the industry about whether we should position AI as your assistant or more as your partner. I don’t have a clear answer on that. In some cases, I want AI just to be my assistant, which means I tell AI what to do, and then it does exactly what I want it to do, right? But...
I also understand that if you just position AI as your assistant, that will also limit the potential, the development capacity, of AI. But when you treat AI more as your partner — I’m thinking about one of my favorite movies. I don’t know if you remember, there was a movie called Her. There was an engineer talking to the computer. Scarlett Johansson played the voice part
Grace Shao (46:41)
Scarlett Johansson, Yeah.
George Chen (46:50)
for the AI, and that was quite a romantic movie, but the ending was not a happy one. So I’m trying to think: if you position AI as a partner, you can empower AI to do more things, but then whether eventually we will also enter some dangerous territory. Then we need to talk more about ethical issues, right, if you treat AI more as a partner. There have been discussions — yes, AI doesn’t have feelings.
But should we also ask the AI to work nonstop? If you think about human rights: if humans work for 10 or 12 hours, they should get a break. Why should we just keep asking AI all these questions, keep the models running to get results? And even if AI doesn’t have physical feelings, does AI have emotional intelligence? I believe at some point AI will have emotional intelligence. That is to say, if you use AI as sort of a slave,
it will also be unhappy. So back to my point, I don’t have a clear answer, but I’m still wondering which status, which category, we should put AI into: more as an AI assistant, or as a human counterpart.
Grace Shao (47:58)
I think that conversation warrants another conversation on its own, that topic, because to your point on the technology aspect, we are seeing a shift from AI as just a consumer chatbot to agentic AI, right? So to your point, AI can actually start completing tasks for you. They can be more proactive, remind you to do things. They are more like a thought partner versus an assistant. But again, even going back to the conversation we had earlier, it’s like, who...
George Chen (48:01)
Ha ha!
Grace Shao (48:26)
Who can play God? Who is to say where the line is, right? And your 10-to-12-hour work ethic thing is very Chinese and American. Definitely in Europe, people are not working 12 hours a day. That is a normal workday for Americans and Chinese. But again, yeah.
George Chen (48:40)
That’s right. You already see the cultural difference here, even in the real world and for AI in different parts of the world.
Grace Shao (48:46)
Right? So who gets to set the standards? And I think it will become harder and harder, because then it becomes more philosophical and ethical than just practical, which is what we’re talking about right now with AI safety. It’s just like, okay, child pornography is fundamentally wrong, homicide videos are not allowed, don’t create fake videos of people doing fake things. That is very black and white; we can almost all universally agree.
But when it becomes cultural — evidently because there are cultural norms, even language norms, societal norms, et cetera, right? Or even each person’s emotional capacity is different. Then who gets to decide when AI needs to stop, right? That’s definitely a very interesting topic, and I’ve been having this conversation with friends as well: has technology hit a point where further development no longer progresses society as a whole? Or are we still actually benefiting from technological advancements? Anyway, I really, really appreciate that, and I could go on forever — this is an interesting topic. Thank you so much for your time, George. I really appreciate your insights and all the expertise and experience you bring to us.
AI Proem is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
By Grace Shao
We talk about the local edge of Chinese models—why in places like Beijing, models such as DeepSeek can be more useful than ChatGPT or Gemini for everyday queries because they’re trained on more localized, timely data. From there, we zoom out into the new AI talent map: countries like Indonesia, Vietnam, Kazakhstan, and Uzbekistan trying to position themselves as low-cost AI talent hubs and “back offices” for global AI companies as coding gives way to prompting and applied ML.
We close on a more philosophical note: should AI be built as a subordinate assistant or a true partner? George shares his uncertainty here, and we talk about what happens when we give AI more agency, emotional intelligence, and continuous workloads. At some point, the conversation shifts from safety checklists to ethics, culture, and even “digital colonialism”: whose values, whose norms, and whose worldview are encoded into the systems that end up mediating how we see the world.
In today’s world, there’s no shortage of information. Knowledge is abundant, perspectives are everywhere. But true insight doesn’t come from access alone—it comes from differentiated understanding. It’s the ability to piece together scattered signals, cut through the noise and clutter, and form a clear, original perspective on a situation, a trend, a business, or a person. That’s what makes understanding powerful.
Every episode, I bring in a guest with a unique point of view on a critical matter, phenomenon, or business trend—someone who can help us see things differently.
For more information on the podcast series, see here.
AI-generated transcript.
Grace Shao (00:00)
Hey George, thank you so much for joining us today. I’ve been really excited and waiting for this chat. You know, you are a very busy man. You’re constantly traveling; I can barely reach you in Hong Kong. So I really appreciate your time today, sitting down with me and sharing your insights with my followers and some of our listeners. To start with, you’ve worn many, many hats: journalist, tech executive, policy advisor, and now a partner at The Asia Group, where you advise a lot of firms — you’re probably helping companies on, I believe, geopolitical positioning, right?
George Chen (00:29)
Thank you. First of all, thanks for the invite. It’s quite an honor to join a growing cohort of guests for your program. Really happy to have a discussion about tech and policy issues, because I think you’re right: my first 10 years were in media, similar to your background, and in the most recent decade, I’ve worked very much at the intersection between technology and policy.
My biggest takeaway from my last job at Meta, one of the biggest platform operators in the world, is that sometimes we focus very much on technology development, like the breakthroughs, while the resources for policy support are actually quite limited, especially in the Asia-Pacific region compared with the US. I think for all the
big tech companies in the US, given the politics domestically, they have to do a lot on the political and policy part. But for Asia-Pacific, the policy work, compared with other investments — in data centers, technology, hiring of engineers — is still very, very understaffed, under-resourced, and sometimes under-appreciated. This is why we need to
address some concerns about policy issues as we advance the technological part. Because I always tell my students, my friends, my partners: even if you have ChatGPT 5.0 or 6.0, the key challenge is how to get the government to understand new technologies and also get the users to have more trust in those new technologies. Otherwise, nobody uses them, nobody trusts those things. And that matters.
Grace Shao (02:15)
I think that’s super helpful. A lot of times when we think about policy or safety issues, we think about them as a siloed part of the ecosystem. But really, exactly to your point, we need the developers to understand the concerns of the users. We need the users to understand the safety risks of the products. We need the regulators to understand what it means to implement these technologies throughout our economy, right? So it’s actually all interrelated.
I think today, to start off, let’s go into big tech, given your background with Meta and working with a lot of these big tech companies. You’re based in Hong Kong, for the listeners, but you’ve actually worked predominantly for American big tech companies. What is, I guess, the fundamental feel right now as we see the evolution from a social media company to an AI-focused company, as AI is now the forefront of their strategy?
George Chen (03:11)
Right, so the Asia-Pacific region is big. I always try to explain to my clients and friends: when people talk about Asia-Pacific, the first rough perception, perhaps from a Western perspective, is, okay, treat Asia-Pacific like the EU, right? But the EU is a single market. They very much share a language, English, also one currency, and they have the European Parliament to pass legislation for EU member countries. Asia-Pacific is far more diverse, far different, and much bigger. So it’s hard to just copy whatever works in the EU and then do it in APAC. Using AI regulation as a clear and classic example: the EU is the first government to have the world’s first AI act, right? But the so-called Brussels effect didn’t really happen this time in Asia-Pacific countries. You didn’t see all the countries, like Singapore or Japan, quickly follow up with a similar risk-based approach or penalty-focused approach to AI, right? Instead, if you look at Japan, they are very much welcoming. Japan declared they want to be the most friendly, open country for AI development. The first data exemption for AI testing was actually in Japan. And then Singapore followed, and Hong Kong is also now considering it, right? So APAC took a very different regulatory approach to AI versus the EU. I think this is something all the American tech companies have to realize. It’s not like America leads technology and then only the EU matters because of the special relationship between the US and EU. So as I mentioned at the beginning, the resources for public policy work are very limited in APAC, while the EU still enjoys a lot of resources — it’s an English-speaking market with a lot of political connections. And then Asia-Pacific, when it comes to policy enforcement and policy support, feels more like a distant third, overall speaking, Asia-Pacific as a whole.
So there’s still a lot of an educational process, a learning curve, for big tech, largely from the US, to understand the challenges and the opportunities in the Asia-Pacific market. However, I also need to highlight that for many big platforms, Asia-Pacific is actually not just the largest market by internet users for American tech companies — for almost all of them, right, in terms of user base. It is also a very important revenue source for those American companies. So now you see the imbalance, right? You make a lot of money from Asia-Pacific, but the support you give to Asia-Pacific is quite limited compared to the US and EU. So the learning curve is there.
American tech companies want more sustainable development and a more constructive relationship — a more constructive partnership — with Asian governments. I think there’s still a lot of work to do.
Grace Shao (06:31)
I think that’s really helpful for listeners, because sometimes people approach me and ask, what’s APAC? I’m like, APAC is a gazillion different markets, and it’s actually so fragmented, right? And I think people sometimes misunderstand it, kind of similar to what you said: they think it’s like the EU. It’s not. There’s actually no consistency in currency, no consistency in language, no consistency even in income or anything. So it’s quite scattered. In that sense, I actually want to ask you — you mentioned something just now.
George Chen (06:39)
That’s right.
Grace Shao (06:58)
Japan and Korea this time are taking a more proactive approach — the countries themselves are really embracing AI — compared to the EU’s more wait-and-see or more protective measures, right? Which is not what they usually would do. What do you think the trade-offs are in that sense? Do you actually think that means we are seeing more innovation, more technological breakthroughs, or even more economic diffusion of the technology right now in Japan and Korea?
George Chen (07:30)
Yeah, let me put it this way. AI technology, we believe, is still in the very early stage, right? If you put it in the overall development toward AGI, we are still very much in the early stage of the curve. So for the Asia-Pacific region, yes, it’s diverse, but we can still see some patterns, some similarities, in terms of different AI strategies. At The Asia Group, my firm, we did a research paper on the different regulatory approaches to AI governance in the vast Asia-Pacific region, from Australia to even Mongolia. Long story short, you are right. Some countries in Asia-Pacific take a more economic-benefit-focused approach, a more innovation-focused approach. Countries like Japan, Korea, and Singapore want to see how AI can help them drive economic impact, right? It doesn’t mean they don’t care about the safety and security issues, but they want to have a certain flexibility, to encourage more startups to succeed, right?
And to a certain degree, maybe to many people’s surprise, China too, because China is very well known as one of the strictest internet markets in the world. Basically very few American tech companies can really succeed in China. The only two exceptions in my mind are Tesla and Apple, but they are more consumer related. If you touch on content,
we’re talking about Google and Meta, and that’s a completely different story. But even so, China this time is also taking a more pro-innovation, pro-economy approach to AI development, and it’s a very top-down approach: President Xi saw the success of DeepSeek, and he basically wants more success stories like DeepSeek. Japan and Korea are in more or less the same category, pro-innovation, pro-economic recovery. For Japan,
I talked with my friends and colleagues in Japan. The sentiment there is: we’ve lost 30 years, three decades, in terms of economic recovery, and this is like our last chance. And Japan has been quite strong in robotics and those fundamental technologies. So that’s the sentiment in Japan: we have to grab the AI opportunity. As for the EU, I have to say, part of the reason the EU has been so keen to develop regulations and legislation over the past five to ten years, in my view, and some may argue and disagree, is that it does come with a sense of protectionism, right? Because if you look at all the market leaders, you name it, OpenAI, Google, Microsoft, AWS, all of them are big tech from America, right?
I remember there was a chart listing the top 10 most advanced AI models. There was only one model from the EU, actually from France. The rest were from the US and China. That tells you a lot. If you are an EU regulator looking at it from a competition perspective, you will more or less have a sense of anxiety, and then you will look at all those big techs and say, no, we need to do something, in the name of safety and security. I’m not blaming EU regulators for doing it. But in the meantime, we also hear more and more concerns, even from heads of state, like French President Macron. He’s concerned that tough regulation on AI in the EU will harm innovation there rather than help European startups.
Grace Shao (11:14)
I think we can double click on China later. It’s going to have its own special segment for sure; China is just such a big story. But for some context for listeners: companies like Meta and Google do actually exist in mainland China, but they mostly only have their ad services there. Basically they help enterprises with their ad sales to the West. To George’s point, they’re not really operating at the full capacity you would see elsewhere in the world.
George Chen (11:33)
That’s right.
Grace Shao (11:39)
Now I do want to finish up on the APAC narrative, and the APAC focus right now, which is ASEAN. Let’s set apart South Korea, Japan, and China; the Northeast Asian countries are frankly much more economically developed as well as more economically focused, right? For ASEAN right now, and this is especially interesting since I just went to Singapore last week: we basically have the big players, like you said, OpenAI, Google, Meta, with their APAC headquarters based in Singapore, even the Tencents and the ByteDances of the world, right? However, Singapore is tiny, in terms of both size and resources. So what we’re seeing is that essentially all the infrastructure you can think of, the compute, energy, data centers, connectivity, is being pushed out to Malaysia, Indonesia, Thailand; they’re building it out over there. How do we understand this? Is this a net benefit for these economies? Or is it actually hurting the local economies, in some ways exploiting them, and really only serving the companies based out of Singapore? How do we understand that?
George Chen (12:45)
That’s right. So let’s talk about Southeast Asia. It’s complicated. When we talk about APAC, the most complicated part, I think, is Southeast Asia. Because when we talk about Korea, Japan, China (even China is a socialist country, but in terms of economic models there are a lot of elements of capitalism), those are the most advanced economies, in Northeast Asia. Southeast Asia is very diverse, with countries very different from each other.
Singapore is the exception, the most advanced economy in Southeast Asia. But in terms of population, the user base is pretty small, like four or five million people, even smaller than Hong Kong. You’re right, a lot of the tech companies, even before AI became a trend, like Meta, Google, Apple, all had their headquarters in Singapore. It has really become the hub for big tech over the past decade. Unfortunately for Hong Kong, thank God we still have big banks like JP Morgan and Goldman Sachs, so we remain a financial center. But in terms of tech innovation, you have to give some respect to Singapore. They did very well to attract those tech headquarters. So this also became, you’re right, sort of a point of
I don’t know how to describe it. Some of the neighboring countries are certainly jealous of Singapore’s success, right? And countries like Indonesia or Malaysia are also wondering how to get some of the benefits from the fact that all the big tech companies have their regional headquarters in Singapore. Because if big tech only cares about the relationship with the Singaporean government, the neighboring countries will not get any benefits. Malaysia actually found its own way into the regional AI race, and its offer is data centers: stable supplies of electricity, relatively much cheaper labor costs, land costs, and overall costs for data center operations. This is why Malaysia got a lot of attention from big tech too; AWS and Microsoft both made huge investments in Malaysia. Not AI R&D yet, maybe, but data centers first. In the AI industry we have a popular saying that AI is like electricity; Sam Altman said that. Basically, it is the new kind of utility for everyone’s life, right? But to develop AI you also need electricity; you need a lot of investment in infrastructure. This is why Malaysia already stands out, and the Philippines too in a way, as the cheap, reliable alternative for data center investments in addition to Singapore. Everybody complains about Singapore in terms of living costs, and even how difficult it is to get a work permit in Singapore these days. Even if you have a qualified job offer, it doesn’t mean you will get a work permit immediately. In comparison, Hong Kong is actually doing quite well at attracting talent more easily these days in the tech and financial space. Back to the AI governance issue: yes, Southeast Asia also took a very different approach compared with Korea and Japan. I think Singapore is an exception.
Otherwise, if you look at countries like Indonesia, or countries like Vietnam, they still take a much more security-focused approach, especially Vietnam, given their political system, right? Meta used to have, and I think still has, a lot of problems in Vietnam. One of the key issues is content and moderation, right? There are a lot of human rights and similar struggles, in Thailand too. So in those countries, I feel like the old problems from the social media era were never really solved.
And those problems will be carried into the AGI era. When those governments look at AI, their first question is: okay, how can I prevent people from using AI to cause any unnecessary trouble, meaning social instability, right? So big tech companies will face the same old problems. And that tells you a lot: when those countries look at AI, they still come at it very much from a security-focused mindset.
Grace Shao (17:09)
That’s really fascinating, because actually I’ve done some research and writing on the infrastructure build-out, on the Johor build-out and the overcapacity in data centers right now. And the unfortunate cost is that the local infrastructure is not able to support the rampant build-out, and it’s actually affecting people’s livelihoods, right? But your point is really interesting. I didn’t think about it that way, from the perspective of these big tech companies.
It’s about preventing bad actors from using their technology to spread intentional, harmful content, right? And then, like you said, cause social unrest that would be very troublesome for the local government. So I guess that’s the policy perspective carried over from the social media era. But what would be different? What is something big tech will have to start thinking about that they didn’t even have to worry about before?
George Chen (18:03)
Right. Well, as I said, a lot of the old problems from the social media era will remain in the AI era, such as misinformation and sensitive political speech, and, especially for countries like Vietnam and Thailand, content about the royal family. When I worked at Meta, those Southeast Asian countries were always considered high-risk countries when it comes to content policy risk, right?
On the other side, Vietnam, Thailand, and Indonesia are much bigger markets, and they are also smart; they also consider AI an opportunity. So they are thinking: how can we use AI to train the next generation of digital talent? Those countries also have relatively young demographics, so there are a lot of smart kids who can get on AI and learn. I think that also poses an opportunity for partnership with the American tech companies: can we do training programs to grow the next generation of AI talent in those countries? Those governments would very much welcome such initiatives. And this is not just happening in Southeast Asia. Part of my career experience was in Central Asia, and I can tell you even countries like Kazakhstan and Uzbekistan are trying to focus on talent development. Because, if you think about learning how to code 10 years ago, it was actually quite an expensive exercise. You needed professional tutors, and long hours to learn one language. When I grew up, we started with Microsoft’s DOS system, and then C; no one talks about those now.
It took like a year just to get a basic sense of those languages, right? But with AI, you don’t need to learn to code; it’s more important to understand how to write a proper prompt. So countries like Uzbekistan and Kazakhstan are also catching up, trying to be the back office for big tech, to train and grow junior engineers. Hopefully some basic work can get done in those countries for labor-cost reasons, rather than hiring all those engineers in Silicon Valley. And I think that poses the same sort of opportunity for Indonesia, Vietnam, Thailand, and other Asian countries.
Grace Shao (20:30)
That’s really interesting. So there’s a reshuffling of talent, and the talent strategy is actually changing from the social media era, or just the big tech era. I want to look at responsible AI. We hear the phrase a lot, right? Responsible AI, AI safety. From your experience,
what does responsible AI actually look like inside a company, and what changes in org charts, KPIs, or decision making when we talk about responsible AI? What are the metrics we must track?
George Chen (20:59)
That’s right. Okay, so you mentioned that I wear a lot of hats. I don’t want to speak like a professor, but I do teach a course at the University of Hong Kong and at Tsinghua University. My course is about digital society and governance, and one of the lectures is actually about AI governance for corporates. Responsible AI is a term that’s very popular not just in the tech industry; you now hear it more and more in business in general, right?
In my view, responsible AI is something like the privacy statement. When you go to a website now, the privacy statement has become a very normal thing, right? When you use a service, they have to get the user’s consent first, and they need to tell you what kind of data they’re collecting and for what purpose. That’s the privacy statement; you’ll find one on every website. Responsible AI is similar.
The government does its job from a regulatory perspective, and from a self-regulatory perspective it works with NGOs and associations to have an industry code. But for corporates, responsible AI is more like a set of business-led principles. I want to use Microsoft as an example; I think Microsoft is leading the way in how business can take a more responsible, sustainable approach to AI. Microsoft calls it trustworthy AI rather than responsible AI, but that’s just a name change; it’s more or less the same. Microsoft focuses on three pillars, and I believe many other AI companies focus on more or less the same ones. First is security: you have to have a very secure AI system. That’s the basis; that’s also where user trust comes from. Second is safety: online safety, particularly for more vulnerable groups like children and women, and how to address those issues. The same social media problems, like harassment and even suicide prevention, still exist, if they haven’t gotten worse. Last but not least is privacy; that’s easy to understand. So security, safety, and privacy: those three pillars are the key foundations for responsible AI, trustworthy AI, or whatever name you give it. As for the process, whether for big tech or a traditional business like Starbucks wanting to implement AI: in the social media era we had a famous mantra, privacy by design, which means privacy should be the first thing you consider when you develop a product. This is like a rule, the ABC, the 101, for any product manager, right? When I worked at Meta, the engineers always got this reminder. It’s not that you have a great product idea, talk to everyone, and finally think, okay, I should talk to my privacy lawyer. You should do it the other way around: the first people you talk to should be the privacy legal team, the privacy team, right?
Responsible AI calls for a very similar approach. When you develop an upgrade or a new service backed by AI, the first thing you should think about is whether you can tick the three boxes, security, safety, and privacy, for the AI service or product you’re going to launch. Microsoft set a very good example when developing Copilot, their AI platform for users. So I hope that gives you a rough sense of what responsible AI is about.
Grace Shao (24:39)
I think what stood out to me from what you just said is that a lot of these big tech companies, like Microsoft or Meta, have very mature legal and safety teams in place, right? So it’s much easier for their developers to tap into that know-how and knowledge, and obviously, like you said, it’s an extension of how they handled not just content moderation but product safety in general. But for startups, and I don’t know whether you work with them or not,
just the proliferation of AI tools right now is crazy. Basically, as you hinted, developing a new product is so much easier than it was, say, 30 years ago. It’s not only that the coding languages have gotten easier; now we have agentic coding tools, right? So you can have vibe coding and whatnot. How do we understand product safety and responsible AI when we talk about new products from these startups? And my broader question: how do we understand responsible AI in a big market like China, where a lot of products are consumer-facing AI, versus the US, where it’s much more enterprise-facing? Can you give us some color on that?
George Chen (25:55)
Right. So first of all, startups: yes, we do have some startup clients. I’m very glad that the startup clients we work with in the tech sector are backed either by leading figures in Silicon Valley or by global VCs, so they do have stronger internal compliance controls. And over the years, across all the big tech companies, from Meta to Microsoft, the classic incidents have served as lessons. Remember when Mark Zuckerberg had to apologize over the Cambridge Analytica incident? It doesn’t feel that far away, even for people with short memories. Those incidents did serve as very good lessons for the more US-funded, US-backed startups. Their goal is clear: if you want to get listed on Nasdaq someday, you’d better do things right from the beginning. There are some naughty-boy cases from China, though. You probably noticed some AI app startups from China
grabbing content from Disney, Paramount, Sony, to make those funny AI effects. In fact that is a serious violation of IP-protected content, but those startups are like, I don’t care, I just want to have fun, let’s see how it goes. And suddenly they get one million, two million, then 10 million users within a week. But they’re not going to go far, right? There will be a long series of problems, this and that. So this is not the right approach. I do think startups need to be very clear about the boundaries. It’s not like, okay, you are a startup, so you can lower your compliance requirements and do whatever you want. At the end of the day, you need to be responsible, not just for responsible AI toward your users, but also responsible to your investors, right? So that’s the startup part. In terms of compliance, I think startups in general do pay a lot of attention to complying across different markets, especially now with AI. As we mentioned, if you look at APAC, there’s no unified approach to AI governance. It’s not like the EU, which has the AI Act. So the compliance cost is indeed very high for startups. This is the luxury big tech has. We just discussed Microsoft as a case study: Microsoft has a pretty decent-sized legal team, security team, and enforcement team to support those three pillars of safety, security, and privacy. But a startup, you can imagine, probably has only one lawyer and one policy manager for everything.
That is it. That is a challenge. This is also why a lot of companies complain about the very tight regulatory environment in the EU: as a startup you don’t want to spend all your money, not even half of it, on compliance, right? I always joke with my friends that if a tech company hires more lawyers than engineers, something isn’t right. So this is a constant challenge for startups: how to comply while also continuing to innovate.
Grace Shao (29:28)
I think that’s really interesting, because essentially, like you said, whether it’s a startup or an enterprise, in many ways they’re facing the same issue. But my issue with the AI space right now is the lack of international standardization, right? For example, globally, wherever you go, you can’t just stab someone. The rules around drugs, or even driving and other safety issues, may vary, but there is a standardized baseline of what we as humans believe should not be done, which is essentially: don’t kill people. Homicide is illegal anywhere you go, right?
Now with AI regulation, we’re not seeing countries come together and say, here is a baseline set of things we should simply not allow. Maybe, like you mentioned, child pornography and child safety are very high on the radar, but even that can be quite subjective from culture to culture. So what do we make of that, when AI is going to proliferate across the economy and so many touch points in our daily lives?
George Chen (30:33)
That’s a very important and interesting question and point you make. You reminded me: when I teach my students, one of the examples I give is that I travel a lot, right? In different countries, the first question I ask myself is, which socket do they use for the plug? And that’s even within the EU, well, and the UK, which is no longer part of the EU.
Even when you cross the border from country to country, that’s why we always bring a travel adapter, right? When it comes to AI governance, it’s actually the same problem. You’re absolutely right on this point: the EU has the EU AI Act. If you are a startup and you want to expand into the EU, no argument, no negotiation, you have to comply with the EU AI Act, plus several other regulations like the Digital Markets Act and the Digital Services Act, plus GDPR. So expanding into the EU is not easy; the compliance cost will be very high. But it’s the same thing in APAC. You go to different markets, and Indonesia is going to have its own AI regulation versus Singapore versus other markets. Ideally, the UN should take up a bigger, more powerful role, to have some control or supervision over how AI should be used. Think about the same question for the telephone. When the telephone was invented, why did Hong Kong get country code 852? Why is China 86, and the US 1? Because someone set the standard for telephone codes.
That was the ITU, the International Telecommunication Union. So some people say we also need something like the ITU for AI. The UN has an AI panel, but I don’t know how powerful it is, not to mention that the US government is not really a big fan of the UN these days. So I agree: we should have international standards on AI, especially on AI safety as a key part of AI governance. We should have some principles.
So I think this is something all countries are looking toward. We will very soon have the next annual AI summit, in India in February 2026. I think India wants to use the summit as an opportunity to discuss those standards issues. And one more point, which I don’t know how many people have realized: the US and China are not just competing
on AI technologies, like facial recognition, deepfakes, and other issues, but also competing with each other on AI governance. Basically, the US and China are competing over who is going to write the rules of AI usage for the next generations. So this is another flashpoint between the US and China when it comes to technology innovation, not to mention that the two countries continue to compete on the technology itself, on who has the faster models, whether Gemini or DeepSeek wins the battle. So that is also a story we watch very closely in terms of competition and struggle.
Grace Shao (33:57)
Yeah, I think it’s also because the technology is moving so fast right now that it’s really hard for regulators to keep up, even domestically in each country. So yes, I do agree; I think we need some kind of international standardization. I met with Kuaishou’s representative a week ago, and it was very interesting to hear. They’re very focused on the text-to-image and text-to-video space. And basically they said that in China,
to your point, the cybersecurity laws are among the strictest in the world, and in terms of AI-generated content, AIGC, the rules are also among the strictest in the world. She said that if you remove the watermark, that is actually a criminal offense. It’s quite interesting. On one hand you think it’s very extreme; on the other hand I think it’s very needed, right, to make sure that deepfakes, malpractice, or fabricated content do not spread
George Chen (34:37)
That’s right, yeah.
Grace Shao (34:52)
and lead to the social unrest you mentioned earlier, or disturbance to companies, or even harm to people. Anyway, on that note, I want to talk about China. You’re interviewed a lot by the media on China and the US, whether you want to frame it as tensions or competition or the race, however we want to frame it. There are essentially going to be two camps right now, right? How do we understand the two ecosystems at a high level? Where are the real fault lines? Are they chips, cloud, data, or, like you mentioned, regulatory rules? Help us understand the two ecosystems.
George Chen (35:21)
Right. So let’s talk about China. The US and China don’t just compete in technology; they also clash more and more on AI governance, on the way AI should be regulated. The US published the AI Action Plan under the Trump administration, which is already a shift from the AI policy approach taken by President Biden when he was in office. Under Biden, it was more about protection; Biden focused very much on online safety and so on, and they even set up the US AI Safety Institute. When Trump took over, things changed quite a bit. Now Trump is taking an America-first approach to AI innovation, meaning how to keep America competitive in AI technologies. In the meantime, I think the Trump administration also wants to export the US governance model on AI to the rest of the world, to many of its allies, especially in Asia: Japan, Taiwan, Korea. Meanwhile, China is trying to influence mostly global south countries and Belt and Road countries to align more with China’s AI governance model. So the two countries are competing not just in technology but also over the way AI should be governed.
Grace Shao (37:00)
I think on that note: if you’re a developing country right now, whether in Asia, Africa, or the Middle East, and you’re listening to pitches from both Washington and Beijing, which, like you said, essentially want to capture the rest of the world, what questions should you be asking to avoid being locked into one ecosystem?
George Chen (37:17)
Right. I’m always asked by my friends from global south countries: which side should I take? And my answer is no, you shouldn’t take any side. You should take whatever fits you, a combination of the best you can take from both the US model and the China model. In some ways, China was quite innovative in solving some unique challenges, like deepfakes and so on. But in the meantime, people will say: wow, but you have to sacrifice a lot of your privacy, right? Even to get on the internet in China, you must be a real person; China has the real-ID policy. In Hong Kong it’s now similar: even for a mobile phone, you need to register the number with your real ID. But in the US, this is quite unthinkable. The real-ID approach in Hong Kong and China certainly heightens a lot of people’s privacy concerns. Versus in the US, everybody can join the party, basically, which also wastes a lot of time. In China, I think you will see companies like the traditional BAT, plus DeepSeek and Huawei, leading the way on AI governance through a more company-led plus state-supervised model.
Grace Shao (38:48)
I think that’s really interesting. You just mentioned something that struck a chord with me, because I was just in Singapore, reflecting on how I basically hated my experience there six or seven years ago, when I was single without kids, because everything feels very watched, very sterile, and, to your point, very top-down. But this time, going as a mother of very young children,
George Chen (39:04)
Right.
Grace Shao (39:10)
I loved it. I was like, wow, it’s so clean, it’s so safe. I’d rather give them all my data so they can protect me; they know how to track bad actors. And to your point, it’s very interesting: the idea or the value of liberty may be very different in different cultures, and might also change as you go through different phases of your life. So it will be interesting to see how companies or countries choose which ecosystem to join, based on their own
George Chen (39:17)
Ha ha ha
Grace Shao (39:38)
belief system or value system. I want to ask you: how are the export controls right now actually affecting companies operating between China and the US? Because I know a lot of your clients are probably operating between these two large economies. You sit in Hong Kong, and most of your clients, I would assume, are MNCs that use Hong Kong as some kind of gateway to enter and exit mainland China, right?
George Chen (39:48)
Mm-hmm.
That’s right. I think the China model so far is basically more multilateral, more multilateralism, right? Working with different countries, a more multi-stakeholder approach. This is also why Premier Li Qiang, when he was in Shanghai in July for the World AI Conference, called AI a public good. I found that concept quite interesting. Basically, he said this is not something you and I should exclusively hold; this is a public good, almost for the fate of all mankind in the future, so we need to share the success, share in the growth. Versus the American approach, which is very clear: America first, we need to take the lead, and America has always taken the lead in AI and technology innovation. And again, don’t get me wrong; I think both Li Qiang as Premier of China and President Trump have their own very good reasons to manage AI in their own ways. One wants to keep raising the American flag high and make itself the role model, right? The other wants a more open model everybody can come and share. I think so far the Chinese model is perhaps more appealing to a lot of developing countries, given that
it’s more cost-efficient, and a top-down approach also means high efficiency, versus a more bottom-up, democratic approach where you need to talk to 10 companies to get alignment, this and that.
Grace Shao (41:39)
Yeah, actually, you just touched on something I was going to ask you. China has pushed out the AI Plus initiative, and, like you mentioned, Li Qiang and the leadership are embracing this idea of exporting AI to the global south. Beyond the branding, I was going to ask whether you think it’s actually successful, but it sounds like it is, right? It sounds like the global south is adopting China’s AI ecosystem because it’s more cost-efficient, deployable, and scalable, given that it’s open source, open weight, right? I want to ask you one last question on this section. You comment on China’s AI ecosystem a lot in the media. What is something we’re missing here? What are people, maybe even in the mainstream media, missing that you think is very important for people to know?
George Chen (42:06)
Right. That’s an interesting question. I think the international cooperation part is something I’m quite concerned about. China has a lot of good engineers. Actually, we also saw a lot of engineers coming back to China from the US. However, both sides, the US and China, should talk to each other more to maximize the research capacity for the overall interest of the whole of mankind. So far it’s not happening. And then the result is... I also think the reason why AI is so special is that I believe AI also touches on ideology, the way people think about things. So the US right now is very US-centric, just focused on their AI. And China is very much orientally focused, trying to focus on their version. So when the two AIs basically develop in parallel, you don’t talk to each other, and that will result in a more divided world when it comes to content moderation, when it comes to understanding certain issues, which policy approach you take to explain a historical event, for example. If the two countries, the US and China, don’t talk to each other, it’s not going to be helpful
for the overall development in R&D. So again, when I teach my course about internet governance, I use the world’s most popular apps as examples. Can you believe, and it may not be a surprise for you, that actually seven out of the 10 most popular apps are in English, originally from the US, very much from California? The other three apps are either from China or Singapore. That tells you something. I think when social media companies began to expand into the Asian region, a lot of countries were fearful of the impact that American social media could bring to their markets. They also talk about so-called digital colonialism. Which AI you use will influence your thinking.
So I think in a way, people also need to be mindful of whether you are too much into the US model of AI, and whether that also begins to change the way you think about things. I actually tested the Chinese AI models and tried some of their new features. Sorry, I’m trying to think about which model I used. But my point is, the Chinese models are very local, very efficient. For example, when I’m in...
Beijing, right? I’m not going to use ChatGPT, not because of the VPN, but because I just find that DeepSeek, you know, the database they have is more practical, efficient, and timely than, say, Gemini or ChatGPT, right? So if I look for the best noodle restaurant in Beijing, the answer from DeepSeek could actually be much better, more accurate, than Gemini or OpenAI.
Grace Shao (45:17)
That’s really interesting because I think it reminded me of one of the Chinese LLM startups and they said that they’re actually working with local governments in the global south and exactly to your point is that they localize information, localize the culture or language. It goes beyond just the surface language, right? I think that’s really interesting. I wanted to ask you one last question, which is...
What is one differentiated view or non-consensus view you hold? This could be about the AI sphere or it could be about something in life.
George Chen (45:56)
I’m still trying to understand how we should position AI. There is a debate in the industry about whether we should position AI as your assistant or more as your partner. I don’t have a clear answer on that. In some cases, I want AI just to be my assistant, which means I tell AI what to do and it does exactly what I want it to do, right? But...
I also understand that if you just position AI as your assistant, that will also limit the potential of AI’s development capacity. But when you treat AI more as your partner... I’m thinking about one of my favorite movies. I don’t know if you remember, there was a movie called Her. There was an engineer talking to the computer. Scarlett Johansson played the voice part.
Grace Shao (46:41)
Scarlett Johansson, Yeah.
George Chen (46:50)
voicing the AI. That was quite a romantic movie, but it was not a happy ending. So I’m trying to think: if you position AI as a partner, you can empower AI to do more things, but then whether we will eventually also enter some dangerous territory. If you treat AI more as a partner, then we need to talk more about ethical issues, right? There were discussions about this. Yes, AI doesn’t have feelings.
But should we also ask the AI to work nonstop? If you think about human rights: if a human works for 10 or 12 hours, they have to get a break. Why should we just keep asking AI all these questions, keep the models running to get results? And even if AI doesn’t have physical feeling, does AI have emotional intelligence? I believe at some point AI will have emotional intelligence. That is to say, if you use AI as sort of a slave,
it will also be unhappy. So back to my point, I don’t have a clear answer, but I’m still wondering which sort of status, which category, we should put AI into: more as an assistant, or as a partner.
Grace Shao (47:58)
I think that conversation warrants another conversation on its own, that topic, because to your point on the technology aspect, we are seeing a shift from AI as just a consumer chatbot to agentic AI, right? So to your point, AI can actually start completing tasks for you. It can be more proactive, remind you to do things. It’s more like a thought partner versus an assistant. But again, even going back to the conversation we had earlier, it’s like who...
George Chen (48:01)
Ha ha!
Grace Shao (48:26)
Who can play God? Who is to say where the line is, right? And your 10-to-12-hour workday thing is very Chinese and American. Definitely in Europe, people are not working 12 hours a day. That is a normal workday for Americans and Chinese. But again, yeah.
George Chen (48:40)
That’s right. You already see the cultural difference here, even in the real world, and for AI in different parts of the world.
Grace Shao (48:46)
Right? So who gets to set the standards? And I think it will become harder and harder, because then it becomes more philosophical and ethical than just practical, which is where AI safety is right now. It’s just like, okay, child pornography is fundamentally wrong. Homicide videos are not allowed. Don’t create fake videos of people doing things they never did. That is very black and white. We can almost all universally agree.
But when it becomes cultural, because there are cultural norms, even language norms, societal norms, et cetera, right? Or even each person’s emotional capacity is different. Then who gets to decide when AI needs to stop, right? That’s definitely a very interesting topic. And I’ve been having this conversation with friends as well: has technology hit a point where further development no longer progresses society as a whole? Or are we still actually benefiting from technological advancements? So anyway, I really, really appreciate that. I could go on forever; this is an interesting topic. Thank you so much for your time, George. I really, really appreciate your insights and all the expertise and experience you bring to us.