
Atom Capital: What kind of AI entrepreneurs impress us the most


Editor’s note: The following article was originally published on Medium. Although not directly travel-related, the interview spotlights Melissa Yang, formerly the co-founder, CTO and Vice Chairman of Tujia.com – which was famously known as China’s Airbnb alternative albeit with a wider focus. 

Yang is an investor and serial entrepreneur, as well as a Founding Partner at Atom Capital. Many of the insights on technology adoption and investment overlap with travel and hospitality, and we thought it was worth a repost.


In March and April this year, Geek Park (a top Chinese tech media company) conducted two interviews with active AI VC funds in China. Nearly ten investors from funds such as Qiming Venture Partners, BlueRun Ventures, and Baidu Ventures shared insights on the AI industry, the venture capital market, and AGI investments; Atom Capital took part in both sessions.

The rapid advancement of AI is reshaping the world, and the venture capital landscape is evolving. These interviews addressed many issues critical to AI entrepreneurs. Through these discussions, we aim to help entrepreneurs understand AI trends, what investors look for, how they make investment decisions, and the differences between venture capital practices in China and the US. Here, we have compiled and merged the content from Geek Park’s interviews to share with you.

What are the requirements for entrepreneurs starting businesses in the AI era?

Q: What challenges might AI entrepreneurs face today?

Melissa: We have engaged with many AI entrepreneurs from both China and the US, and noticed a widespread issue: many entrepreneurs have not clearly thought through what problems they aim to solve with AI, and they lack a deep understanding of the real-world application scenarios. Many entrepreneurs are technically strong and often opt to develop frameworks, failing to identify sharp entry points, which leads to product homogenization. Some experienced AI entrepreneurs, in their current ventures, are merely layering new technologies, like natural language interfaces, on top of their previous tech without fully exploiting the potential of new technologies. We understand that this may be related to sunk costs and path dependency. However, this wave of AI technological revolution is transformative, and entrepreneurs need to think outside the box. They should strive to view opportunities and tackle problems with fresh perspectives.

Moreover, I think some startups are not utilizing AI sufficiently internally. In this tech revolution, companies that innovate and enhance efficiency with internal AI use will increasingly distinguish themselves competitively.

Q: In the era of LLMs, which traits are particularly important for entrepreneurs?

Melissa: In the era of LLMs, we consider the following traits crucial for entrepreneurs:

Rapid Learning Ability: LLMs and their applications are in the very early stages with significant uncertainties. Founders need to be keen and flexible, quickly adjusting their strategies based on technological and industry changes.

Strategic Capability: During phases of rapid market changes, it is vital for founders to set the right direction and focus on key points. They must distinguish between opportunities with long-term value worthy of substantial resources and short-lived temptations that should be forsaken.

Innovative Capability: Innovation here arises from two areas: thinking out of the box and the ability to cross boundaries. The disruptive tech changes brought by LLMs require entrepreneurs to rethink how to solve problems and leverage completely new scenarios. This ability to innovate from the fringes of the market, where new technologies often emerge and become mainstream, is crucial. Additionally, exploring interdisciplinary fields with new methods can open up many new opportunities.

Q: Who is the most impressive entrepreneur you’ve encountered recently?

Melissa: Among the many entrepreneurs we’ve met, one particularly stands out — a founder we invested in last year who works in the LLM safety field. As LLMs grow in capability and application, there’s significant potential in this market. However, it’s a high-barrier field, requiring expertise in both AI and security — skills that are rare. This entrepreneur has deep experience and insights in both domains, understanding the core issues and how to leverage new-generation technologies to create viable products. In our interactions, I’ve been impressed by his rapid learning ability to grasp essential points based on market demands and feedback, timely adjust directions, and effectively manage the company’s development pace and key focuses at each stage. His industry experience, insights, quick learning, and strategic abilities are particularly commendable.

AI Industry: Reviewing 2023 and Forecasting 2024

Q: In 2023, which of your thoughts about AI investment thesis were confirmed, and which were disproved?

Melissa: One of our investment strategies has been to focus on areas we understand well and where we can create value. From the inception of our fund, we decided not to invest in LLMs. This decision wasn’t because LLMs lack value — on the contrary, their value in the industry chain is quite clear — but they’re not suitable for startups. LLMs require substantial capital support, which isn’t a strength for new funds like ours. We focus solely on investing in areas we can clearly understand and judge. Reflecting on the past year, two of our key judgments have been validated:

First, LLMs have facilitated horizontal specialization in industries. As foundational infrastructure, LLMs have spawned several layers including Infra, Agent Platforms, and End-user Applications, each corresponding to new entrepreneurial opportunities.

Second, Agents hold immense growth potential. In February last year, we recognized that, with the development of LLMs, Agents could become a massive industry, and we invested accordingly from the start.

There are still some areas we are observing and considering, such as the issues between open-source and closed-source.

In some areas, we previously chose not to invest because the market was unclear to us. For instance, in the AIGC domain, directions like text-to-image and text-to-video seemed quite homogenized, and it was hard to discern any substantial barriers to entry. That changed with the emergence of Sora this year, which demonstrated that large models now have a grounding in physical modeling and some understanding of the physical world, providing a sustainable technological basis for AIGC (AI-generated content). Therefore, multimodality will be one of our key focus areas in 2024.

Q: What are the key AI issues you’ve pondered the most this year?

Melissa: One enduring question that I’ve often pondered is about the limits and boundaries of AI. This question is crucial for entrepreneurs as it determines the direction of the industry track and its opportunities.

Additionally, I am particularly focused on the pace of AI implementation — how to transition the various capabilities of LLM demos into production environments to generate real business value. We’ve observed that, starting from the second half of last year, early-stage AI investments have slowed down both domestically and internationally. The speed of AI implementation greatly influences startup opportunities, development pace, and strategic decisions. As an early-stage VC fund, we need to anticipate and strategically prepare.

Specifically, we are paying close attention to several aspects:

The reasoning capabilities of foundation LLMs. This is currently the bottleneck for Autonomous Agents. To what extent can the reasoning capabilities of something like GPT-5 be elevated?

The memory structure of Agents. Future AI personalization will rely on memory, and its implementation could accelerate application deployment (a rough sketch of the idea follows this list).

The development pace of open-source models and Chinese LLMs. Matching the performance of GPT-4 is an important milestone. Meta’s recently released Llama 3 is already close to GPT-4, and the progress in open-source has been quite impressive. We also expect domestic LLMs to reach GPT-4 levels by the end of this year.
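
To make the memory point above a little more concrete, here is a minimal, purely illustrative sketch of an agent memory store. The class and method names are hypothetical, and the keyword-overlap recall is a stand-in for what real systems would typically do with embeddings and a vector database.

```python
# A minimal sketch of an agent memory store, assuming a naive keyword-overlap
# retrieval heuristic. Names and structure are illustrative only; production
# agent memory usually relies on embeddings plus a vector store.
from dataclasses import dataclass, field


@dataclass
class MemoryItem:
    text: str                                   # what the agent observed or concluded
    tags: set[str] = field(default_factory=set)  # crude keyword index


class AgentMemory:
    def __init__(self) -> None:
        self.items: list[MemoryItem] = []

    def remember(self, text: str) -> None:
        # Store the raw text along with its lowercased keywords.
        self.items.append(MemoryItem(text=text, tags=set(text.lower().split())))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Rank past items by keyword overlap with the query and return the top k.
        words = set(query.lower().split())
        ranked = sorted(self.items, key=lambda m: len(m.tags & words), reverse=True)
        return [m.text for m in ranked[:k]]


if __name__ == "__main__":
    memory = AgentMemory()
    memory.remember("The user prefers concise answers in English.")
    memory.remember("The user's project runs on an open-source MoE model.")
    print(memory.recall("Which model does the user's project use?"))
```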

Regarding AI implementation, I’d like to share some recent observations:

This round of AI technological revolution does indeed have Product-Market Fit (PMF); however, many of these PMFs are currently being realized internally within large corporations. In discussions with friends at internet giants both domestically and internationally, we’ve found that they have effective implementation cases within their internal operations. But since these are applied internally to reduce costs and improve efficiency, and are not offered externally as products or services, they remain unknown to the outside world. The practices of these large corporations demonstrate the commercial value of AI. As the technology matures and the cost of LLMs decreases, it will inevitably expand to benefit more businesses and consumers.

A new trend in AI application is emerging — from LLMs to composite artificial intelligence systems. Last year, the focus was on the capabilities of large models, but clearly, relying solely on LLMs is insufficient. At this stage, LLMs provide an “intelligent” foundation, but implementation requires combining LLMs with different technologies to build a composite AI system that orchestrates the models to meet demands and achieve optimal performance. We believe that more and more companies will use composite AI systems to improve the quality and reliability of AI applications and accelerate implementation, which may be one of the most important trends in AI for 2024.
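
As a rough illustration of what such orchestration can look like, the sketch below wires together a toy retriever, a placeholder LLM call, and a simple validator. The function names and the llm_complete() placeholder are assumptions made for illustration, not any particular vendor’s API.

```python
# A minimal sketch of a "composite AI system": an orchestration layer that
# combines retrieval, an LLM call, and a rule-based check. llm_complete() is
# a hypothetical placeholder for a hosted or local model call.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by keyword overlap with the query.
    words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(words & set(d.lower().split())),
                  reverse=True)[:k]


def llm_complete(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"[model answer based on a prompt of {len(prompt)} characters]"


def validate(answer: str) -> bool:
    # Simple guardrail: reject empty or overly long answers.
    return 0 < len(answer) < 2000


def answer_question(query: str, documents: list[str]) -> str:
    # Orchestration: ground the model with retrieved context, call the LLM,
    # and fall back if the output fails validation.
    context = "\n".join(retrieve(query, documents))
    answer = llm_complete(f"Context:\n{context}\n\nQuestion: {query}")
    return answer if validate(answer) else "Unable to produce a reliable answer."
```

The point of the sketch is not the specific components but the orchestration layer itself: in a composite system, quality and reliability come from how retrieval, generation, and validation are combined, not from the LLM alone.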

Q: What AI product have you paid the most attention to in Q1 this year?

Melissa: The product I focused on the most in Q1 was OpenAI’s Sora, which impressed me most by further demonstrating the possibilities of multimodal models. If 2023 was the “Year of the Large Language Model,” we believe 2024 will be the “Year of the Multimodal Large Model.” With the introduction of the visual modality, we expect the intelligence of foundation models to climb another rung. On one hand, because vision can in certain respects abstract information more effectively, the bandwidth of interaction between LLMs and humans increases significantly. On the other hand, the visual modality introduces a time dimension, which naturally builds many causal relationships into the models. This will enhance the LLMs’ understanding of the physical world and thereby improve their reasoning abilities. This year, we will be very focused on applications that leverage the multimodal capabilities of LLMs to unlock new scenarios and interactions.

We also have many questions about Sora, such as the limits of its physical model and visual capabilities, and how to make the generated videos more controllable and interactive. By comparison, LLM-based applications are effective partly because they are assisted by mechanisms like RAG, which do not yet exist in the visual domain. While the demo is impressive, it may still be some distance away from practical implementation.

Q: You focus on AI investments in both the U.S. and China. From your perspective, what are the differences in AI entrepreneurship and investment between these two countries?

Melissa: From the industrial ecosystem perspective, the U.S. has a more mature division of labor. Many small companies that specialize in a particular field or aspect can thrive in the U.S. In China, large companies tend to dominate both upstream and downstream businesses.

There are also differences in the preferences of VC funds between the U.S. and China. If a project excels only in a specific area, U.S. investors might consider the investment more from a cost-benefit perspective, whereas investors in China might think such projects are too small in scale. This is likely due primarily to differences in exit strategies, as China lacks a mature M&A culture.

Support for early-stage startups also varies. In Silicon Valley, early-stage VC funds offer more comprehensive support to startups, including funding, mentorship, and customer resources. In contrast, support for early-stage startups in China is relatively limited. This is something Atom Capital aims to address.

Additionally, regarding the pace of AI investment in China and the U.S.: starting in mid-2023, U.S. AI investments also became more rational, which tracks the stage of technology development, since technology develops nonlinearly. When a new technology like ChatGPT emerges, it triggers a wave of investment. However, as time progresses, investors come to recognize the limits of implementing the technology, and investment becomes more rational.

In the long run, AI remains hugely significant. As long as investors can see AI enabling more scenarios, they will continue to invest. For example, Fortune 500 companies in the U.S. have concrete plans to invest in AI; by last summer they had already set aside considerable budgets for this new wave of AI technology.

The origins of Atom Capital and our investment thesis

Q: What impact has the emergence of LLMs had on your fund?

Melissa: The emergence of LLMs directly led to the creation of this fund. At the end of 2022, the release of ChatGPT greatly caught my attention. I view it as a disruptive technology that will have a long-term impact on industries. Based on this judgment, I decided to establish a fund focused on investing in the new generation of AI, investing in both the US and China, which is Atom Capital.

My initial intention was to leverage my entrepreneurial background, cross-border experience, resources, and technical advantages (Tsinghua University + Microsoft) to provide a more forward-looking perspective and bring real assistance to early-stage entrepreneurs.

We position ourselves as a research-oriented fund. Since our establishment, we have conducted extensive industry research, parts of which have been published through the fund’s public channels (Medium, official website, etc.). The core purpose of these studies is to identify opportunities with long-term value brought by AI technology.

Identifying opportunities with long-term value involves considering two factors: one is the maturity of AI technology itself, and the other is whether AI can be implemented in industries to create actual value. AI technology development follows a clear trajectory, and by conducting in-depth research, we aim to grasp and predict its development direction and pace.

Currently, the AI industry is still at an early and rapidly developing stage, with new technologies and applications emerging constantly. With targets shifting this quickly, many directions that initially look like opportunities can become obsolete just as fast, so it is essential to stay keenly aware of technological iterations and frontier dynamics in order to uncover truly valuable opportunities.

On the other hand, the development of AI technology requires capital to accelerate its implementation, which in turn requires consensus within the venture capital community. Therefore, in this wave of AI technology, entrepreneurs, investors, and users need to interact closely and shape one another in order to establish that consensus. As a research-oriented fund, we hope to participate in and contribute to this consensus building, and to find startups that align with it, maximizing the efficiency of capital and resources.

Q: What rounds and stages will you focus on investing in, and what are the essential criteria in your investment standards?

Melissa: Atom primarily invests in early-stage startups (Seed-Series A). Our investment criteria include two essential conditions: first, a new generation of AI-native teams with a solid understanding of LLMs; second, teams with a clear understanding of the problem/scenario they intend to solve, able to maximize the capabilities of LLMs to address these issues.

Entrepreneurship must ultimately answer the question of whose problem it solves and how. This requires founders to have deep insight into the industry, to identify core problems that have long gone unresolved, and to use new AI technology to solve them. Once this question is thought through clearly, the company’s long-term goals, business value, and the form and development path of the product all become clear.

Q: What are Atom’s investment focuses this year?

Melissa: Currently, Atom’s focus is divided into three main directions: multimodality, open source and the opportunities it brings, and Agents.

First, regarding multimodality: video itself introduces a temporal dimension, which enables it to better express causal logic and enhance understanding of the physical world. Video also contains a wealth of information, significantly increasing the communication bandwidth between users and LLMs. This increase in interaction bandwidth is expected to give rise to many new applications. For instance, information that is difficult to describe in words, such as engineering drawings, can now be fed into LLMs through video. This will make many previously challenging tasks feasible and is expected to lead to explosive growth in a range of applications.

Second, we are focusing on open source and the opportunities it brings. Currently, many LLMs adopt the MoE (Mixture of Experts) architecture, including Sora and Gemini 1.5 Pro. Unlike a traditional Transformer that operates as one massive neural network, an MoE model consists of many small “expert” neural networks, which significantly improves inference efficiency and cost.

We believe the proliferation of the MoE architecture could disrupt the current closed-source versus open-source landscape in the LLM field, bringing new development opportunities to open-source LLMs. Each expert model in an MoE is very small, and the open-source community can “piece together” these modules, or take an open-source MoE LLM and optimize one or two of its expert models to strengthen its capabilities in a specific professional area. This greatly reduces the disadvantages open-source models face in computing power, data, and capital.
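
For readers unfamiliar with the architecture, here is a minimal sketch of the routing idea behind MoE, written in PyTorch. The tiny dimensions, top-1 routing, and class name are illustrative simplifications and assumptions, not how any of the models mentioned above are actually implemented.

```python
# A minimal sketch of Mixture-of-Experts routing in PyTorch: a gating network
# assigns each token to one small "expert" MLP, so only a fraction of the
# parameters is active per forward pass. Sizes and top-1 routing are
# deliberate simplifications of production MoE designs.
import torch
import torch.nn as nn


class TinyMoELayer(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # router over experts
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Route each token to its top-1 expert.
        scores = self.gate(x).softmax(dim=-1)   # (tokens, n_experts)
        top_expert = scores.argmax(dim=-1)      # (tokens,)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_expert == i
            if mask.any():
                out[mask] = expert(x[mask])     # only this expert runs for these tokens
        return out


if __name__ == "__main__":
    layer = TinyMoELayer()
    tokens = torch.randn(8, 64)
    print(layer(tokens).shape)  # torch.Size([8, 64])
```

The same sparsity is what makes the “swap one expert” idea above plausible: because each expert is a small, self-contained module, a community can in principle retrain or replace individual experts without touching the rest of the model.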

We will also continue to focus on the development of Agent Platforms and their application areas. Agent Platforms are also one of the key carriers for implementing multimodal applications.

This article was originally published on Medium

