
Summarization Space, AI Tools, and Knowledge Profiles: A Discussion
Origin
https://sv3yuw5snhpmfira.public.blob.vercel-storage.com/883da331-034e-4b3d-9d0b-40e43542310c-qiuvvxH8EdTwms1pqQbSyVFY38aoeA.mp3

Abstract
The discussion centers on the summarization space, comparing products like Fathom and Granola, and introducing a new summarization tool focused on structuring data into chapters and interconnected clusters of meaning for improved Q&A. The conversation explores the potential of using recorded conversations and team communications to build dynamic data sets and knowledge profiles, envisioning a marketplace similar to Hugging Face for these data sets. The speakers discuss strategies for achieving critical mass of data sets, starting with local team data and expanding to public data, and consider the potential of AI in creating knowledge profiles and personalized experiences. The conversation also touches on go-to-market strategies, emphasizing a feature-driven approach, and the value of open-sourcing components to grow a platform and build a backbone for AI applications.

Contributors, Acknowledgements, Mentions


• Unknown
• Platogram, Chief of Stuff, Code Anyway, Inc.

Chapters
• Intro and Sharing Preferences [0]
• Summarization Products and Fathom [1]
• Granola and Windows vs. Mac [2]
• Windows Flexibility and Battery Life [3]
• Granola’s User Experience [4]
• Structured Data and the Need for Rewriting [5]
• Granola’s Limitations and the Rewrite Approach [6]
• Unlimited Context and Dynamic Pipelines [7]
• Building a Marketplace of Dynamic Data Sets [8]
• Starting Local and Indexing Existing Data [9]
• Public Data Sets and Political Data [10]
• SoCap’s Matching Program and Knowledge Data [11]
• Creating an AI Board of Directors [12]
• Recording and Structuring Conversations [13]
• Personalized AI and Knowledge Profiles [14]
• Bordly and AI-Powered Networking [15]
• Bordly’s Marketing Success vs. Product Value [16]
• The Limitations of AI-Driven Introductions [17]
• Bordly’s Background and the AI Girlfriend Approach [18]
• AI People and the Go-to-Market Strategy [19]
• Future Feature Market Fit and Controlled Environments [20]
• Summarization as a Feature and Channel Fit [21]
• API-First Approach and the Context Backbone [22]
• Open Sourcing and Templated AI Agents [23]
• Browserbase and Mainframe [24]
• Mainframe and Local Q&A Models [25]
• Outro [26]

Introduction
Okay, if we see something that’s worth sharing, we’ll just do that [0][1]. If
not, you know, we’ll just keep that between us [2][3]. And yeah, if you have
like recording, like the whole thing, you know, we can process using Shrink [4].
Exactly [5]. And then, and then you could, you could just download the video,
right, and, and process it using your stuff now [6]. Yeah, yeah [7]. So I have
my own little product here now, so, yeah, we could just upload it there [8]. You
will see like, the structured output [9]. So we’re kind of like competing in the
summarization space, obviously, so [10].

Discussion
Intro and Sharing Preferences
Okay. [0] If we see something that’s, that’s worth sharing, we’ll just do that. [1]
If not, you know, we’ll just keep that between us. [2]

Granola and Windows vs. Mac


Right. [3] And yeah, if you have like recording, like the whole thing, you know,
we can process using Shrink. [4] Exactly. [5] And then, and then you could, you
could just download the video, right, and, and process it using your stuff now.
[6]

Granola’s Limitations and the Rewrite Approach


Yeah, yeah. [7] So I have my own little product here now, so, yeah, we could
just upload it there. [8] You will see like, the structured output. [9] So we’re
kind of like competing in the summarization space, obviously, so. [10]

Summarization Products and Fathom


Yeah. [11] By the way, have you seen any products in like, summarization space
or like, anything for calls? [12] Like, do you use any, you know, call recordings,
any bots for like, you know, Zoom meetings, whatever? [13] I, I use Fathom, I
think mainly because when I was looking for, for a tool to use, they were the
top of mind and they, they, they’re also a part of YC community. [14] And so
they had a secret deal for all of the YC founders where essentially you’re able to
get a team plan for a year for free. [15] And so like, I signed up, started using it.
[16] I can’t say that I’m like constantly returning to the notes. [17] I’d say, like,
most of the times I just like knowing that, okay, you know, if anything there is
a recording I can share. [18]

Granola and Windows vs. Mac


But I have been very envious of people who are on Mac and can use Granola
because everyone who told me about Granola told me that they are really great

and they don’t require a bot setup, which I like, because I don’t think that a
bot setup is the way to go, to be honest, in the future. [19] And so, yeah, I’m
just waiting for Granola to release something on Windows so that I can be able
to use it as well. [20] What the benefits? [21] You know, like, I can show you I
have one really. [22] I have like, lots of devices here, but like, I have only one
device here running Windows. [23] And you will see it in a second. [24] You’ll
get the why, you know, because this machine is like, you know, this is a beast.
[25] Yeah, this is just. [26] Yeah, yeah, that’s like really at the top of the line,
you know. [27] Razer for gaming. [28] Yeah, that’s, that’s the only why. [29]
But like, any other reasons except the Razer? [30]

Windows Flexibility and Battery Life
Yeah, I mean, I don’t know. [31] Like, I, I’ve always been. [32] Been a Windows
user, I guess, and like, like for some time, I guess because of the, because of
Mac’s speed and performance. [33] Like when they released M processors and
started kind of doubling down on that. [34] I saw some, some, some difference,
but probably not big enough difference for me to make the switch because I,
I just got too comfortable with, with my setup when the Windows processors
also made a big jump. [35] Like for example, now I’m using Surface, the latest
edition. [36] In terms of snappiness, it’s the same, right? [37] So like my wife
uses Mac, it’s her work laptop and it’s the same. [38] It’s literally the same. [39]
And so now for me it’s just a matter of convenience because processing speed,
power, all the same. [40] Even the form factor, very, very similar. [41] The build
quality as well. [42] And so the only difference I guess is not having some of the
hipster apps like Granola or maybe some others. [43] Not that many. [44]
Yeah. [45] And so all in all, I don’t know, I just don’t see too big of a push to
make the switch given that. [46] I don’t know, I’m, you know, I’m using iPhone,
but you know, it’s fine, right? [47] I’m not, I’m not like a super big Apple
geek or something, so. [48] And actually like that on Windows, you know, you
just have more flexibility on how you could use the system and how you could
customize the system and also like just more flexibility in terms of again like
running games in terms of running models on device as well. [49] Although you
could do that on Mac too. [50] But I do use some, some apps that, that are not
on Mac. [51] So. [52] Yeah, I guess just a combination of reasons, but the main
one is just, yeah, I don’t feel the need to switch for some reason. [53] There’s
like no charger whatsoever. [54] And on Windows I never. [55] But now it’s not
an issue anymore because like the new processors, yeah, the new processors in
Windows, they’re amazing. [56]
Yeah. [57] Because you know that Razer, it’s like top of the line Razer. [58] But
then even if it’s not gaming, you know, even if it’s just something low key like
watching videos even, it’s gonna be like two hours, you know, and it’s getting
so hot. [59] It’s just insane. [60] This like, you know, you can cook something
on top of this, like on top of a laptop. [61] Yeah, it’s insane. [62] But yeah, just,
you know, M2 is incredible in terms of just battery, I guess. [63] And the other,

you know, I have the iMac. [64] Actually it’s like 60 years old maxed machine.
[65] But then, yeah, it’s just maxed with 64 gigs of RAM and you know, it’s,
it’s enough. [66] So I get it. [67] I get it. [68] But, you know, you’re right about
Granola and like the whole user experience, they just bumped up one feature
that. [69]
Yeah. [70] In terms of like, you know, the whole user flow, you don’t need to do
the bots. [71] You can use the system recording stuff and. [72] Yeah, it’s been
there for like years that everybody could do that, but just Granola, the first
team who actually delivered, you know, the usable product. [73] Yeah, I guess
it’s all about the use. [74] Yeah, it’s kind of part of the deck. [75] I don’t recall
if I shared the deck with you, but like the whole idea, you know, that when
we talk about the basics, you know, you need structured data as input data.
[76] You want to get some answers from any system. [77] You know, ChatGPT,
Gemini, you know, Perplexity, they’re all structured the same. [78] It’s a Q and
A machine. [79] You know, you go there, you seeking for answers. [80]
Yeah. [81] So you need the structured data as an input. [82] So I’m seeing
right now that, you know, all of the recordings, any conversations like this
one. [83] Yeah, obviously that’s a great input data. [84] But then you need
to store it somewhere and then you need to transform this because it’s just a
raw conversation. [85] Obviously it’s going to be in transcript, but then it’s not
exactly the same. [86] The quality of the input, completely responsible for the
quality of the output. [87] Of course, that’s kind of the big deal. [88] So we
are not just about summarization, we’re about the rewrite. [89] So what are we
doing exactly is just structure into chapters, into connected clusters of meaning.
[90] And then. [91] Yeah, if we’re talking about certain things, like Granola,
and we talked a few bits in the beginning, then we’re talking in the middle and
that’s going to be a few things in the end of the call. [92] That’s going to be
the part just about the and all. [93] I will show the example when we have the
recording in place. [94] But that’s the idea. [95] If we talk about certain topics
and specifically like team communications, if we talk about certain topics across
multiple calls, it’s going to be all interconnected. [96]
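The chapter-and-cluster idea described above can be sketched in very rough form. This is an illustration only, not the product's actual pipeline: it splits a list of transcript sentences into "chapters" wherever lexical overlap between neighbouring sentences drops, a crude stand-in for the semantic clustering the speakers describe. The `threshold` value is an assumed tuning knob.

```python
# Illustrative sketch (not the actual product code): segment a transcript
# into chapters by detecting drops in word overlap between sentences.

def words(sentence: str) -> set:
    """Lowercased word set for a sentence, punctuation stripped."""
    return {w.strip(".,?!").lower() for w in sentence.split()}

def jaccard(a: set, b: set) -> float:
    """Lexical overlap between two word sets (0.0 .. 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def segment(sentences: list, threshold: float = 0.05) -> list:
    """Start a new chapter whenever overlap with the previous sentence
    falls below `threshold` (an assumed tuning knob)."""
    chapters, current, prev = [], [], None
    for s in sentences:
        ws = words(s)
        if prev is not None and jaccard(prev, ws) < threshold and current:
            chapters.append(current)
            current = []
        current.append(s)
        prev = ws
    if current:
        chapters.append(current)
    return chapters

transcript = [
    "Granola records calls without a bot.",
    "Granola then summarizes the call notes.",
    "Our marketplace hosts dynamic data sets.",
    "The marketplace works like Hugging Face.",
]
chapters = segment(transcript)
```

A real system would use embeddings rather than word overlap, and would also link related chapters across multiple calls, which is the "interconnected" part the speaker emphasizes.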
So then, yeah, you can just import this into any system whatsoever, like Perplexity, any system that’s good with RAG pretty much. [97] And get your answers.
[98] So, like, what’s the progress on marketing across the last 10 calls with my
team, for example, so. [99] And Granola was going kind of that direction. [100]
So you can record them without the bot. [101] That’s great. [102] Then you
can use those recordings for the Q and A. [103] But like the problem, you know,
it’s it’s not continuous. [104] You cannot go this over the call. [105] You cannot,
you know, do anything with the. [106] Maybe they’re going there, I don’t know.
[107] But like the current version is very limited and what they’re actually doing,
they’re not giving you the full output, they’re giving like the summarized one,
you know, and all of the summarizations I’ve seen like the best ones out there,
they’re all pretty limited. [108] You know, it’s few paragraphs of text and then
you’re trying to fit there, I don’t know, two hours of conversation, for example.

[109] So what do we provide is more of a rewrite. [110] So again the full meaning,
but just very structured. [111] And then yeah, if it’s conversations, whatever,
you know, any interviews, then yeah, it’s going to be hundreds of pages, that’s
fine. [112] You know, for RAG, you can input thousands of pages already.
[113] So any of the large scale outputs can be reused as input data. [114]
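The query the speaker gives ("what's the progress on marketing across the last 10 calls") boils down to a retrieval step over the rewritten, chapter-structured output. A minimal sketch of that lookup, with illustrative names and a toy word-overlap score standing in for the embedding search a real RAG system would use:

```python
# Illustrative sketch: rank structured chapters against a question by
# word overlap; real RAG systems would use embeddings, but the shape
# of the retrieval step is the same.

def tokenize(text: str) -> set:
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query: str, chapters: dict, k: int = 1) -> list:
    """Return the titles of the k chapters sharing the most words with
    the query."""
    q = tokenize(query)
    ranked = sorted(chapters.items(),
                    key=lambda item: len(q & tokenize(item[1])),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# Hypothetical chapters produced by the rewrite step.
chapters = {
    "Marketing progress": "We reviewed marketing progress across ten calls.",
    "Hiring plans": "We discussed hiring a first C-level employee.",
}
top = retrieve("What is the progress on marketing?", chapters)
```

The retrieved chapter text would then be fed to the answering model as context, which is why the quality of the rewrite directly bounds the quality of the answers.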
So. [115] And yeah, we’re pretty much thinking that that’s a great system
to get all of the data in the same format and then on scale that’s unlimited
context. [116] And with what we call, you know, context delivery systems right
now, it’s the idea that, you know, you don’t actually need to rely on the pre
trained models. [117] If you have great pipeline in place and if you need certain
vertical, certain data, like financial data, you need just a pipeline. [118] You
need SEC filings, you need some earnings recordings, whatever. [119] All right, so
you need teams, conversations. [120] Yeah, that’s just different pipelines. [121]
So we’re kind of in the process of first getting this data and then second make
it shareable, make kind of like a marketplace. [122] So end of the product is
going to be more of a Hugging Face. [123] But for those data sets and the data
sets themselves going to be more dynamic. [124] So the whole idea, you need
dynamic pipelines of data. [125]
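The "different pipelines, one format" idea above can be sketched as follows. Everything here is illustrative: the fetcher names and record shape are assumptions, and a real pipeline would pull filings and call transcripts from their actual sources rather than inline stubs.

```python
# Illustrative sketch of a "context delivery" pipeline: each vertical
# plugs in its own source, but every record is normalized into one
# shared document format.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ContextDoc:
    source: str   # e.g. "sec-filing", "team-call"
    title: str
    text: str

def pipeline(fetch: Callable[[], Iterable[dict]], source: str) -> list:
    """Normalize raw records from any fetcher into ContextDoc entries."""
    return [ContextDoc(source=source, title=r["title"], text=r["text"])
            for r in fetch()]

# Hypothetical stub fetchers standing in for real data sources.
def fetch_filings():
    return [{"title": "10-K excerpt", "text": "Revenue grew 12%."}]

def fetch_calls():
    return [{"title": "Weekly sync", "text": "Marketing plan reviewed."}]

corpus = (pipeline(fetch_filings, "sec-filing")
          + pipeline(fetch_calls, "team-call"))
```

Because every pipeline emits the same `ContextDoc` shape, the downstream Q&A layer never needs to know which vertical the data came from, which is what makes the data sets shareable on a marketplace.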
Yes. [126] What’s happening within your team on the scale and continue to
happen every day. [127] So that’s the idea. [128] But so to have a marketplace
data sets, you need to first get to critical mass of those data sets. [129] How are
you guys thinking about going to that point? [130] So there are two things. [131]
First, it’s definitely local data sets for your team. [132] You have recordings and
lots of teams using Zoom and they store it somewhere. [133] Even in the cloud.
[134] We can process those, just index of the calls you had over the years and
then use those as input. [135] There are lots of data just stored somewhere and
not reused at all. [136] So that’s true. [137] That’s true, yeah, yeah. [138] So
first is definitely converting all of the knowledge that’s been hidden in those
files. [139] Lots of interviews, lots of YouTube videos, lots of calls, et cetera.
[140] It’s just not indexed, it’s not processed. [141] You know, so sometimes
there’s certain like, you know, separated pages. [142] It’s flying. [143] Yeah,
yeah, you can find like, you know, phantom transcripts for every video, but
then it’s not interconnected again. [144] So that’s the idea. [145] So rewrite,
build the data set, reuse what’s been already recorded. [146] Build, you know,
so, and again, locally, if we’re talking about, you know, any just, just local
teams, there shouldn’t be even connected. [147] You know, we don’t need to
get to the marketplace as a step one. [148] Step one, just make it for you, your
team, you know, it could be your personal context. [149] Then on a slightly
larger, you know, container is going to be your team’s context and then the
step two is going to be, okay, how can we bring more public data sets from
some institutions, from certain partners. [150] So like where certain, with Yale
Medical School, they have like, you know, Dr. [151] Rounds, they’re covering
certain medications, you know, and et cetera. [152] So that’s going to be more
of a. [153]

Okay. [154] There are certain things happening. [155] By the way, one of
the topics in that direction is definitely going to be about politics because so
many politicians are talking about so many things. [156] How do you find this
data? [157] You can’t, you know, if you’re asking ChatGPT about latest, you
know, Trump policy, good luck. [158] So yeah, that’s exactly the direction we’re
going. [159] You know, starting small, starting local, starting with just team
communications, for example, but indexing the past and indexing the current
data and then bringing the same approach but on a larger scale and specifically
building the public data sets which could be, could be used by anybody. [160]
Yeah, it’s interesting because one of the things we’ve been like when it comes to
SoCap and what we’re now exploring, we recently started matching thousands
of startups that we’ve been getting as part of the funnel, just using SoCap
for their own needs, with experienced founders who we have on the other end,
who we personally vetted. [161] And we know that they would love to get a
couple more kind of project startups. [162] They can help as advisors, as long
term advisors. [163] And so as we’re doing this, we see that our value, SoCap value beyond matching goes first of all into allowing both parties to use this streamlined social graph, meaning advisors. [164] Social data, like connections data, is in SoCap and so founders, startups can easily see who they can make
introduction to, for biz dev, fundraising, for hiring something else, and then vice
versa as well. [165] But then the other part of the data that we haven’t been
doing anything with, but we want to is knowledge data. [166]
Right. [167] Because the way we’re thinking about it, there is this kind of long
shot idea that sounds really interesting of basically creating an AI board of
directors that is based on the social graph, the knowledge graph, the experience
graph of those specific folks who have already been there, done it, and we already
have part of it covered with social graph, but we don’t have the knowledge and
experience graph covered as well. [168] So what we’ve been thinking about is
basically like finding a format where all of the conversations between, between
the successful matches, meaning between advisors and between startups, the
office hours, the one on ones, the group sessions, the Q&As, all of these things
would be recorded. [169] And then we’ll probably need to use something like
you guys are building to, to be able to structure it, standardize it more or less
in terms of the, in terms of the, the preparation for RAG and then just making it
available as, as chatbot, making it available as just learnings as, I don’t know,
transform it into parts of like playbook. [170]
Right. [171] When it comes to answering specific queries such as, such as how
should you think about hiring your first C level employee or something like
this. [172] Right. [173] And then ideally we want to have not just like random
advice from the Internet, but like actual specific advice that has been generated
through those conversations and like, you know, one on one help removing some
of the more personal things
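The "removing some of the more personal things" step before indexing those advisor conversations could look roughly like the following. This is a hypothetical sketch, not SoCap's or the product's actual redaction logic, and a production pipeline would also handle names, addresses, and other identifiers:

```python
# Illustrative sketch: scrub obvious personal details (emails, phone
# numbers) from transcript chunks before they are indexed for RAG.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(chunk: str) -> str:
    """Replace emails and phone numbers with placeholders."""
    chunk = EMAIL.sub("[email]", chunk)
    return PHONE.sub("[phone]", chunk)

chunk = "Reach the advisor at jane@example.com or 555-123-4567."
clean = redact(chunk)
```

Redacting before indexing, rather than at query time, means the personal details never enter the shared knowledge profile at all.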

Conclusion
So, yeah, we’re definitely going up to knowledge profiles [201]. And by the way,
have you seen the new product in your space, it’s called Bordly [202]? It’s been
going like wild on LinkedIn [203]. “They’re interested to use us to process the
calls” [207]. “So yeah, they do like the basic job, really great. It’s just
they’re not able to provide any, you know, like, like after call summaries or
anything to emails. That’s what we do right now” [208, 209, 210].
“Like you’re looking at this from your perspective. I’m looking at this from my
perspective as well” [217, 218]. “And so for me the real value when it comes to
networking and when it comes to connections is actually in how you process and
use your own connections and how you get value from it, et cetera” [232]. “And
the further we go, I think the harder it would be if it’s not a specific person we
trust making the introduction by a random AI bot” [234].
“Have you guys been thinking about maybe using a similar, a similar kind of
like go to market approach to some of your products? Meaning positioning the
products as like AI people, AI employees and then doing something around it.
Have you been thinking about it” [250, 251]? “We are kind of building less in,
in a consumer direction right now” [253]. “It’s more about artifacts, you know,
and building those artifacts to actually prove some, some, some theories, you
know, and just work with people in certain” [256, 257]. “So the whole idea of
this, you know, it’s, it’s even pre, you know, startups and stuff, it’s, it’s more
about what I call future feature market fit, you know, so not like a product, but
like feature or even like feature channel fit” [259, 260].
“With Shrink, the AI is just one feature. We are talking about summarization.
We can deliver superior summarization. That’s a superpower. But you’re not
sure if that’s a whole product” [266, 267, 268, 269, 270]. “So that’s the whole
idea here, makes sense” [291]. “But still it’s very small experiments” [293].
“We can just deliver the API for the context” [299]. “I would say first we’re
building the backbone for those applications and then yeah, if we want to try the
applications ourselves, build the examples, for example, or even put something
open source like the core tech” [301, 302]. “So building some experiments that
could be run on your foundation” [308, 309, 310]. “I mean open sourcing some,
some little bits and pieces I think is certainly a great, a great way to, to grow
your platform. Just make more people try it, see it and whatnot” [312, 313].
“Awesome” [332].

References
1. Okay.
2. If we see something that’s, that’s worth sharing, we’ll just do that.
3. If not, you know, we’ll just keep that between us.
4. Right.
5. And yeah, if you have like recording, like the whole thing, you know, we
can process using Shrink.

6. Exactly.
7. And then, and then you could, you could just download the video, right,
and, and process it using your stuff now.
8. Yeah, yeah.
9. So I have my own little product here now, so, yeah, we could just upload
it there.
10. You will see like, the structured output.
11. So we’re kind of like competing in the summarization space, obviously, so.
12. Yeah.
13. By the way, have you seen any products in like, summarization space or
like, anything for calls?
14. Like, do you use any, you know, call recordings, any bots for like, you
know, Zoom meetings, whatever?
15. I, I use Fathom, I think mainly because when I was looking for, for a tool
to use, they were the top of mind and they, they, they’re also a part of
YC community.
16. And so they had a secret deal for all of the YC founders where essentially
you’re able to get a team plan for a year for free.
17. And so like, I signed up, started using it.
18. I can’t say that I’m like constantly returning to the notes.
19. I’d say, like, most of the times I just like knowing that, okay, you know, if
anything there is a recording I can share.
20. But I have been very envious of people who are on Mac and can use
Granola because everyone who told me about Granola told me that they
are really great and they don’t require a bot setup, which I like, because I
don’t think that a bot setup is the way to go, to be honest, in the future.
21. And so, yeah, I’m just waiting for Granola to release something on Win-
dows so that I can be able to use it as well.
22. What the benefits?
23. You know, like, I can show you I have one really.
24. I have like, lots of devices here, but like, I have only one device here
running Windows.
25. And you will see it in a second.
26. You’ll get the why, you know, because this machine is like, you know, this
is a beast.
27. Yeah, this is just.
28. Yeah, yeah, that’s like really at the top of the line, you know.
29. Razer for gaming.
30. Yeah, that’s, that’s the only why.
31. But like, any other reasons except the Razer?
32. Yeah, I mean, I don’t know.
33. Like, I, I’ve always been.
34. Been a Windows user, I guess, and like, like for some time, I guess because
of the, because of Mac’s speed and performance.
35. Like when they released M processors and started kind of doubling
down on that.

36. I saw some, some, some difference, but probably not big enough difference
for me to make the switch because I, I just got too comfortable with, with
my setup when the Windows processors also made a big jump.
37. Like for example, now I’m using Surface, the latest edition.
38. In terms of snappiness, it’s the same, right?
39. So like my wife uses Mac, it’s her work laptop and it’s the same.
40. It’s literally the same.
41. And so now for me it’s just a matter of convenience because processing
speed, power, all the same.
42. Even the form factor, very, very similar.
43. The build quality as well.
44. And so the only difference I guess is not having some of the hipster apps
like Granola or maybe some others.
45. Not that many.
46. Yeah.
47. And so all in all, I don’t know, I just don’t see too big of a push to make
the switch given that.
48. I don’t know, I’m, you know, I’m using iPhone, but you know, it’s fine,
right?
49. I’m not, I’m not like a super big Apple geek or something, so.
50. And actually like that on Windows, you know, you just have more flexi-
bility on how you could use the system and how you could customize the
system and also like just more flexibility in terms of again like running
games in terms of running models on device as well.
51. Although you could do that on Mac too.
52. But I do use some, some apps that, that are not on Mac.
53. So.
54. Yeah, I guess just a combination of reasons, but the main one is just, yeah,
I don’t feel the need to switch for some reason.
55. There’s like no charger whatsoever.
56. And on Windows I never.
57. But now it’s not an issue anymore because like the new processors, yeah,
the new processors in Windows, they’re amazing.
58. Yeah.
59. Because you know that Razer, it’s like top of the line Razer.
60. But then even if it’s not gaming, you know, even if it’s just something low
key like watching videos even, it’s gonna be like two hours, you know, and
it’s getting so hot.
61. It’s just insane.
62. This like, you know, you can cook something on top of this, like on top of
a laptop.
63. Yeah, it’s insane.
64. But yeah, just, you know, M2 is incredible in terms of just battery, I guess.
65. And the other, you know, I have the iMac.
66. Actually it’s like 60 years old maxed machine.
67. But then, yeah, it’s just maxed with 64 gigs of RAM and you know, it’s,

it’s enough.
68. So I get it.
69. I get it.
70. But, you know, you’re right about Granola and like the whole user expe-
rience, they just bumped up one feature that.
71. Yeah.
72. In terms of like, you know, the whole user flow, you don’t need to do the
bots.
73. You can use the system recording stuff and.
74. Yeah, it’s been there for like years that everybody could do that, but
just Granola, the first team who actually delivered, you know, the usable
product.
75. Yeah, I guess it’s all about the use.
76. Yeah, it’s kind of part of the deck.
77. I don’t recall if I shared the deck with you, but like the whole idea, you
know, that when we talk about the basics, you know, you need structured
data as input data.
78. You want to get some answers from any system.
79. You know, ChatGPT, Gemini, you know, Perplexity, they’re all structured
the same.
80. It’s a Q and A machine.
81. You know, you go there, you seeking for answers.
82. Yeah.
83. So you need the structured data as an input.
84. So I’m seeing right now that, you know, all of the recordings, any conver-
sations like this one.
85. Yeah, obviously that’s a great input data.
86. But then you need to store it somewhere and then you need to transform
this because it’s just a raw conversation.
87. Obviously it’s going to be in transcript, but then it’s not exactly the same.
88. The quality of the input, completely responsible for the quality of the
output.
89. Of course, that’s kind of the big deal.
90. So we are not just about summarization, we’re about the rewrite.
91. So what are we doing exactly is just structure into chapters, into connected
clusters of meaning.
92. And then.
93. Yeah, if we’re talking about certain things, like Granola, and we talked
a few bits in the beginning, then we’re talking in the middle and that’s
going to be a few things in the end of the call.
94. That’s going to be the part just about the and all.
95. I will show the example when we have the recording in place.
96. But that’s the idea.
97. If we talk about certain topics and specifically like team communications,
if we talk about certain topics across multiple calls, it’s going to be all
interconnected.

98. So then, yeah, you can just import this into any system whatsoever, like
Perplexity, any system that’s good with RAG pretty much.
99. And get your answers.
100. So, like, what’s the progress on marketing across the last 10 calls with my
team, for example, so.
101. And Granola was going kind of that direction.
102. So you can record them without the bot.
103. That’s great.
104. Then you can use those recordings for the Q and A.
105. But like the problem, you know, it’s it’s not continuous.
106. You cannot go this over the call.
107. You cannot, you know, do anything with the.
108. Maybe they’re going there, I don’t know.
109. But like the current version is very limited and what they’re actually doing,
they’re not giving you the full output, they’re giving like the summarized
one, you know, and all of the summarizations I’ve seen like the best ones
out there, they’re all pretty limited.
110. You know, it’s few paragraphs of text and then you’re trying to fit there,
I don’t know, two hours of conversation, for example.
111. So what do we provide is more of a rewrite.
112. So again the full meaning, but just very structured.
113. And then yeah, if it’s conversations, whatever, you know, any interviews,
then yeah, it’s going to be hundreds of pages, that’s fine.
114. You know, for RAG, you can input thousands of pages already.
115. So any of the large scale outputs can be reused as input data.
116. So.
117. And yeah, we’re pretty much thinking that that’s a great system to get all
of the data in the same format and then on scale that’s unlimited context.
118. And with what we call, you know, context delivery systems right now, it’s
the idea that, you know, you don’t actually need to rely on the pre trained
models.
119. If you have great pipeline in place and if you need certain vertical, certain
data, like financial data, you need just a pipeline.
120. You need SEC filings, you need some earnings recordings, whatever.
121. All right, so you need teams, conversations.
122. Yeah, that’s just different pipelines.
123. So we’re kind of in the process of first getting this data and then second
make it shareable, make kind of like a marketplace.
124. So end of the product is going to be more of a Hugging Face.
125. But for those data sets and the data sets themselves going to be more
dynamic.
126. So the whole idea, you need dynamic pipelines of data.
127. Yes.
128. What’s happening within your team on the scale and continue to happen
every day.
129. So that’s the idea.

130. But to have a marketplace of data sets, you first need to get to a critical mass of those data sets.
131. How are you guys thinking about getting to that point?
132. So there are two things.
133. First, it’s definitely local data sets for your team.
134. You have recordings, and lots of teams using Zoom store them somewhere.
135. Even in the cloud.
136. We can process those, just index the calls you had over the years and then use those as input.
137. There are lots of data just stored somewhere and not reused at all.
138. So that’s true.
139. That’s true, yeah, yeah.
140. So first is definitely converting all of the knowledge that’s been hidden in
those files.
141. Lots of interviews, lots of YouTube videos, lots of calls, et cetera.
142. It’s just not indexed, it’s not processed.
143. You know, sometimes there are certain, like, you know, separate pages.
144. It’s flying around.
145. Yeah, yeah, you can find, like, you know, Fathom transcripts for every video, but then it’s not interconnected again.
146. So that’s the idea.
147. So rewrite, build the data set, reuse what’s been already recorded.
148. Build, you know, so, and again, locally, if we’re talking about just local teams, it doesn’t even need to be connected.
149. You know, we don’t need to get to the marketplace as step one.
150. Step one, just make it for you, your team, you know; it could be your personal context.
151. Then a slightly larger, you know, container is going to be your team’s context, and then step two is going to be, okay, how can we bring in more public data sets from some institutions, from certain partners.
152. So, like, with Yale Medical School, they have, you know, doctors’ rounds.
153. They’re covering certain medications, you know, et cetera.
154. So that’s going to be more of a.
155. Okay.
156. There are certain things happening.
157. By the way, one of the topics in that direction is definitely going to be
about politics because so many politicians are talking about so many
things.
158. How do you find this data?
159. You can’t, you know; if you’re asking ChatGPT about the latest, you know, Trump policy, good luck.
160. So yeah, that’s exactly the direction we’re going.
161. You know, starting small, starting local, starting with just team communications, for example, but indexing the past and indexing the current data, and then bringing the same approach on a larger scale, and specifically building the public data sets, which could be used by anybody.
162. Yeah, it’s interesting, because when it comes to SoCap and what we’re now exploring, we recently started matching thousands of startups that we’ve been getting as part of the funnel, just using SoCap for their own needs, with experienced founders who we have on the other end, who we personally vetted.
163. And we know that they would love to get a couple more kind of project startups.
164. They can help as advisors, as long-term advisors.
165. And so as we’re doing this, we see that our value, SoCap’s value beyond matching, goes first of all into allowing both parties to use this streamlined social graph, meaning advisors.
166. Social data, like connections data, is in SoCap, and so founders, startups can easily see who they can make introductions to, for biz dev, fundraising, for hiring, something else, and then vice versa as well.
167. But then the other part of the data that we haven’t been doing anything with, but we want to, is knowledge data.
168. Right.
169. Because the way we’re thinking about it, there is this kind of long-shot idea that sounds really interesting of basically creating an AI board of directors that is based on the social graph, the knowledge graph, the experience graph of those specific folks who have already been there, done it, and we already have part of it covered with the social graph, but we don’t have the knowledge and experience graph covered as well.
170. So what we’ve been thinking about is basically finding a format where all of the conversations between the successful matches, meaning between advisors and startups, the office hours, the one-on-ones, the group sessions, the Q&As, all of these things would be recorded.
171. And then we’ll probably need to use something like you guys are building to be able to structure it, standardize it more or less in terms of the preparation for RAG, and then just making it available as a chatbot, making it available as learnings or, I don’t know, transforming it into parts of, like, a playbook.
172. Right.
173. When it comes to answering specific queries, such as how you should think about hiring your first C-level employee, or something like this.
174. Right.
175. And then ideally we want to have not just, like, random advice from the Internet, but actual specific advice that has been generated through those conversations and, like, you know, one-on-one help, removing some of the more personal things, or, you know, maybe hiding some of the more personal things that people would not have wanted to share.
176. Yeah.
177. So, and kind of my prediction, you know, is that today we have a very standalone experience.
178. You go into ChatGPT, it has zero understanding about your own context,
you know, who you are or what you’re doing.
179. It’s just okay, you’re sending something to the chat.
180. It gives you the same experience it will give any other user, right?
181. Yes, but in the future.
182. Yeah, I believe.
183. But, you know, those conversations should enrich your own profile and give it a better understanding of who you are as the person chatting with it.
184. So, like, the very basics, you know: if you’re a second grader and you chat with ChatGPT, it should give you answers as to a second grader.
185. So we’re actually doing another pilot in the educational space.
186. It’s like a Polytech Institute, you know, very mathematical.
187. It’s in North Carolina.
188. So we’re adding kind of an evaluation pipeline, like an evaluation loop there.
189. So first we have all the sessions, all of the lectures, all the materials indexed; then students take the tests, showing how good they are in math and certain topics.
190. And yes, we are able to then evaluate.
191. Okay, we have every student’s level.
192. And when students communicate and students go through the program, it’s much easier to adapt what kind of answers they’re getting from the systems, what kind of material they need for homework and stuff.
193. So yeah, that’s exactly what we see: we can use this data processed from the users to actually build the knowledge profiles.
194. And I do believe that the knowledge profile is going to be a very common thing.
195. But again, it should be something more accessible.
196. It shouldn’t be like the private architectural concept today, just for one company, just for one system.
197. So the whole approach to the marketplace: it should be like Linux, like an open standard where everybody can create a data set and work with it, and then users could have profiles.
198. Right.
199. And you can just get easy validation based on users’ interactions with the data.
200. What do we know about the users?
201. You know, so that’s a very interesting thing.
202. Yeah, so we’re definitely going toward knowledge profiles.
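A knowledge profile enriched by processed conversations might look something like this minimal sketch; the class and its fields are purely hypothetical:

```python
from collections import Counter

class KnowledgeProfile:
    """Hypothetical per-user knowledge profile, enriched as the user's
    conversations are processed. Names and fields are illustrative only."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.topics = Counter()  # topic -> how often it came up

    def enrich(self, conversation_topics: list[str]) -> None:
        """Fold the topics extracted from one conversation into the profile."""
        self.topics.update(conversation_topics)

    def top_topics(self, n: int = 3) -> list[str]:
        """The user's most frequent topics, for adapting answers to them."""
        return [t for t, _ in self.topics.most_common(n)]

p = KnowledgeProfile("founder-42")
p.enrich(["fundraising", "hiring", "fundraising"])
p.enrich(["pricing"])
print(p.top_topics())  # fundraising first
```

The idea is that the profile is just data, so under an open standard any system could read it and tailor its answers, rather than each company keeping its own private version.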
203. And by the way, have you seen the new product in your space? It’s called Boardy.
204. It’s been going, like, wild on LinkedIn.
205. Oh, Boardy, yes.
206. Have you tried it on LinkedIn?
207. Yeah, I’m actually having a call with one of the founders in a week or so.
208. They’re interested in using us to process the calls.
209. So yeah, they do the basic job really great.
210. It’s just that they’re not able to provide any, you know, like, after-call summaries or anything to emails.
211. That’s what we do right now.
212. Already.
213. So yeah, they kind of want to improve at a certain scale, but the basics, you know, are pretty solid.
214. Like the whole experience of getting an understanding of you through the call, that’s.
215. Yeah, that’s exactly the point.
216. It’s just that then you need more calls to enrich this, and that’s where they need some support, you know, and help.
217. Interesting.
218. Like you’re looking at this from your perspective.
219. I’m looking at this from my perspective as well.
220. And for me, the interesting case about Boardy is that I think they really did an amazing job with marketing, with identifying this super hook that allowed them to go viral around this idea of an AI raising a round.
221. So no one has done that before, and so, you know, it has been very smart to still get that out there.
222. Yeah, it’s true that it worked for them, right?
223. It worked for them, yeah.
224. Because I’ve been following them for a while, and that was the thing that allowed them to, like, push into where most people heard about them.
225. Right.
226. But when it comes to the actual value, the actual product, even not talking about how well they perform, I now know that for many or even most people it hasn’t really performed well in terms of making intros and stuff.
227. I got a random Jewish founder from like Israel, you know, just.
228. Absolutely.
229. Yeah, I mean, he was a chill guy, but, like, zero things in common, and he was just pitching some things to me because I had this slight VC background, and, like, hey, whatever.
230. Even setting that aside, my perspective here is that this niche of matching people together for, like, new conversations and stuff, almost like a reincarnation of Lunchclub or whatnot, is definitely, like, a useful niche.
231. But I haven’t seen any meaningful long term products evolving from it in
the professional context.
232. In the social context, obviously, there are a ton of dating apps and stuff like that, but it’s a bit different, I’d say.
233. And so for me the real value when it comes to networking and when it
comes to connections is actually in how you process and use your own
connections and how you get value from it, et cetera.
234. Because meeting new people will always be, like, this new extra thing that you could either get something from or not.
235. And the further we go, I think the harder it will be if the introduction is made not by a specific person we trust but by a random AI bot.
236. And so that has been my biggest concern about Boardy.
237. I haven’t seen them moving into a more or less similar direction to the one we’re moving into with SoCap.
238. And I’m just not sure if, where they are, there is enough long-term business, enough long-term value to create.
239. But it’s certainly a great hype story, right?
240. Yeah, but again, so, you know, initially they weren’t about any market or whatever.
241. It’s just a second-time founder who just.
242. Okay, he had his background in, like, you know, a large-scale studio business, so he had enough credentials to just fundraise from his network.
243. So you know, the story is very, very basic.
244. You know, he was quite open about that.
245. Like you know, kudos to him but he could push himself the same narrative
about you know, born doing this whole thing.
246. Yes, but, like, he even directly shared it.
247. Yeah, I had connections in the space and I just reached out to my connections, and then Boardy handled the rest, whatever the rest is.
248. But yes, in terms of a market, yes, it’s less about a market.
249. It’s just about him, you know, starting a new company, whatever.
250. But yeah, definitely an interesting direction; it was kind of like the AI girlfriend approach, you know, having the conversation and getting something from the conversation as data.
251. Have you guys been thinking about maybe using a similar kind of go-to-market approach for some of your products?
252. Meaning positioning the products as, like, AI people, AI employees, and then doing something around it.
253. Have you been thinking about it?
254. You know, we are kind of building less in a consumer direction right now.
255. So we need the endpoints to actually showcase and do the demos.
256. But, like, you know, we’ve taken the approach of South Park Commons.
257. I really love those folks.
258. It’s more about artifacts, you know, and building those artifacts to actually prove some theories, you know, and just work with people in certain.
259. So artifacts.
260. Yeah.
261. So the whole idea of this, you know, it’s even pre, you know, startups and stuff; it’s more about what I call feature-market fit, you know, so not, like, a product, but, like, a feature, or even, like, feature-channel fit.
262. You know, you build one feature for a specific channel, and this is your, like. It’s not an MVP, you know; the MVP is later.
263. But, like, first it’s just getting one channel, getting one feature, building the artifact, testing this, seeing how well the feature is doing, you know, and then getting some data from this and making some data-driven decisions.
264. So for me, yeah, it’s more about a controlled environment.
265. Talking about a controlled channel with certain features which could be delivered just.
266. Okay.
267. With Shrinked, the AI is just one feature.
268. We are talking about summarization.
269. We can deliver superior summarization.
270. That’s a superpower.
271. But you’re not sure if that’s a whole product.
272. Right.
273. And so that’s, that’s why you want to position.
274. Yeah, exactly.
275. Exactly.
276. Yeah.
277. We’re not sure.
278. We didn’t have the user experience around this.
279. We just, you know, we just brought this to the people.
280. Okay.
281. First we started with summarization.
282. Then, okay, people can get the summarization async, for example.
283. They don’t need to wait, you know; they can just somehow bring the files to us, and then they get it in their emails, and we saw that.
284. Okay, summarization plus emails, that’s the channel, you know, and we’re like, okay, that’s how it sounds.
285. Fun.
286. Yeah.
287. And from that we get some data that people love this, but they want more than one file, for teams, for lots of calls.
288. We’re like, okay, we can deliver reports every week.
289. What’s happening within your team?
290. And yeah, that’s kind of how you scale this vertically.
291. You’re starting with one small thing, summarization.
292. Then you add the channel, emails, and then, like, okay, we see what could be slightly bigger and what’s smaller, but still scale.
293. So that’s the whole idea here; makes sense.
294. But still it’s very small experiments.
295. And what we also learned, why I’m not that bullish on just consumer.
296. Consumer.
297. Because I’m thinking that if we go one kind of API-first for the developers who need summarization, who want to build AI agents; for example, you build an AI agent, you’re building any device that records your context, whatever.
298. You need the backbone for all this, and for your matchmaking purposes, for Boardy, for whatever, you still need the same thing.
299. So we’re seeing that first before we even go into the marketplace mode.
300. We can just deliver the API for the context.
301. Makes sense.
302. Nobody has it today.
303. I would say first we’re building the backbone for those applications, and then, yeah, if we want to, try the applications ourselves, build the examples, for example, or even put something open source, like the core tech.
304. Right now we’ve built a really great prompter.
305. We do the prompts at scale right now to actually be able to process context at scale; we’ll just put it open source, you know, later.
306. Yeah, we can just build, like, a templated version of AI growth and, for example, put it on GitHub, you know, and scale it this way.
307. Why not?
308. Yeah, yeah, yeah.
309. So building some experiments that could be run on your.
310. Yeah.
311. On your foundation.
312. Yeah, yeah, that makes sense.
313. I mean, open sourcing some little bits and pieces, I think, is certainly a great way to grow your platform.
314. Just make more people try it, see it, and whatnot.
315. I liked what folks, for example, over at Browserbase have been doing as well.
316. I love them.
317. I just really love those guys.
318. Yeah, those guys, and Mainframe.
319. Mainframe is insane.
320. Telling the team, you know, this sucks.
321. I don’t know Mainframe, I need to check them out.
322. They put open-source Llama, Llama 2 or Llama 3, even an app, like just the open-source version of the app for the App Store, maybe Android, on GitHub, so you can just run it on your device and locally do Q&A with Llama, like the latest version of Llama.
323. Interesting.
324. That’s incredible.
325. I’ll check them out.
326. It’s, like, one of the only really working local Q&A models that could be run on, like, any cheap device, you know, and they put it all out for free, so.
327. And they have, like, all this fancy branding, like, full suite, you know, like
Vacancy Inc.
328. Like, all those folks.
329. So I really love that direction, you know, like, vintage tech in a way.
330. Interesting.
331. Interesting.
332. Yeah.
333. Awesome.