Twelve Labs, a San Francisco, CA-based video understanding company, raised $10M in funding.
Backers included NVentures, Intel, and Samsung Next.
The company intends to use the funds to expand operations and its business reach.
Led by CEO Jae Lee, Twelve Labs provides video-to-text generative APIs powered by its latest video-language foundation model, Pegasus-1. The model enables capabilities such as summaries, chapters, video titles, and captioning from videos, even those without audio or text. With the release of its public beta, organizations and developers can retrieve a moment within hours of footage by describing that scene in text, or generate relevant body text, be it titles, chapters, summaries, reports, or tags, incorporating both the visual and audio content of a video simply by prompting the model.
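As a rough illustration of the prompt-driven workflow described above, the sketch below assembles a request for a video-to-text generation call. The endpoint URL, field names, and `build_generate_request` helper are hypothetical placeholders for illustration only, not Twelve Labs' actual API.

```python
import json

# Hypothetical endpoint; the real API's URL and schema may differ.
API_URL = "https://api.example.com/v1/generate"

def build_generate_request(video_id: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for a hypothetical
    video-to-text generation call against an already-indexed video."""
    return {
        "url": API_URL,
        "headers": {
            "x-api-key": api_key,            # caller's API key (placeholder)
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "video_id": video_id,            # ID of a previously indexed video
            "prompt": prompt,                # e.g. "List chapters with timestamps"
        }),
    }

request = build_generate_request(
    "vid_123", "Summarize this video in three sentences.", "demo-key"
)
print(request["url"])
```

The same request shape would cover summaries, chapters, titles, or tags by varying only the prompt text.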
The company, which opened its beta to the public after a private beta period, is headquartered in San Francisco, with an APAC office in Seoul. Over the course of its closed beta, in which more than 17,000 developers tested the platform, Twelve Labs worked to ensure a scalable, fast, and reliable experience.
FinSMEs
24/10/2023