This write-up about the site is also fascinating: https://pudding.cool/2025/07/street-view/
The Pudding is one of the best things on the internet today.
Added to top text. Thanks!
This would be an interesting additional layer for Google Maps search, which I often find to be lacking. For example, I was recently travelling in Gran Canaria looking for places selling artisan coffee in the south (spoiler: only one, in a hotel that took me almost half an hour to even find). Searching for things like "pourover" and "v60" is usually my go-to signal, but unless the cafe mentions this in their description or it's mentioned in reviews, it's hard to find. I don't think they even index the text in the photos customers take (which will often include the coffee menu behind the cashier).
Seems like searching for V60 would get you a lot of Volvos! Is anyone photographing these words in coffee shops that would let them be surfaced here?
Yeah, that can be somewhat of a problem in bigger cities ;-) It's pretty common for people to have taken a photo of the menu in cafes, but as mentioned it seems Google isn't ingesting or surfacing that information for text search.
It could be. If they didn't think about it, now they can.
Could easily see myself coming back to this.
│
└── Dey well; Be well
GitHub of the person who prepared the data. I am curious how much compute was needed for NY. I would love to do it for my metro but I suspect it is way beyond my budget.
https://github.com/yz3440
(The commenters below are right. It is the Maps API, not compute, that I should worry about. Using the free tier, it would have taken the author years to download all tiles. I wish I had their budget!)
I would wager the compute for the OCR is cheap. Just get a beefy local desktop PC, if it runs overnight or even takes a week that's fine.
It's the Google Maps API costs that will sink your project if you can't get them waived as art:
https://mapsplatform.google.com/pricing/
Not sure how many panoramas there are in New York or your metro, but if it's over the free tier you're talking thousands of dollars.
The linked article mentions that they ingested 8 million panos - even if they're scraping the dynamic viewer that's $30k just in street view API fees (the static image API would probably be at least double that due to the low per-call resolution).
OCR I'd expect to be comparatively cheap, if you weren't in a hurry - a consumer GPU running PaddlePaddle server can do about 4 MP per second. If you spent a few grand on hardware that might work out to 3-6 months of processing, depending on the resolution per pano and size of your model.
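The 3-6 month estimate roughly checks out with a quick back-of-envelope. Only the 8 million pano count and the 4 MP/s figure come from the thread; the per-pano resolution and GPU count below are assumptions:

```python
# Back-of-envelope check on the OCR timeline above.
# Assumed (not from the thread): ~16 MP processed per panorama, 2 GPUs.
panos = 8_000_000        # panoramas, from the linked article
mp_per_pano = 16         # assumed megapixels per panorama
throughput = 4           # MP/s per GPU, per the parent comment
gpus = 2                 # "a few grand" of consumer hardware, assumed

seconds = panos * mp_per_pano / (throughput * gpus)
months = seconds / (30 * 24 * 3600)
print(f"{months:.1f} months of around-the-clock OCR")  # ~6 months
```

With 4 GPUs instead of 2 it halves to roughly 3 months, which is where the 3-6 month range comes from.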
Their write up (linked at top of page below main link, and in a comment) says:
> "media artist Yufeng Zhao fed millions of publicly-available panoramas from Google Street View into a computer program that transcribes text within the images (anyone can access these Street View images; you don’t even need a Google account!)."
Maybe they used multiple IPs / devices and didn't want to mention doing something technically naughty to get around Google's free limits, or maybe they somehow didn't hit a limit doing it as a single user? Either way, it doesn't sound like they had to pay if they only mention not needing an account.
(Or maybe they just thought people didn't need to know that they had to pay, and that readers would just want the free access to look up a few images, rather than a whole city's worth?)
Any possibility this is user-submitted panoramas, and maybe they don't charge for those?
It says 8 million images. So, 13.2 images/second for one week.
I'm wondering more about the data - did they use Google's API, or work with Google to use the data?
I just hashed out the details with Claude. Apparently it would cost me ~$8k USD to retrieve all Taipei street images from the Google Maps API at 3 m density. Expensive, but not impossible.
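A rough sanity check on that figure. Every input here is an assumption, not an official number: road network length for Taipei, one pano per 3 m of road, and a ballpark $7 per 1,000 Street View Static API requests:

```python
# Rough sanity check on the ~$8k Taipei estimate. All inputs are guesses.
road_km = 3_000          # assumed total road length in Taipei
spacing_m = 3            # one panorama every 3 m, per the parent comment
price_per_1k = 7.0       # assumed USD per 1,000 Static API requests

requests = road_km * 1_000 / spacing_m
cost = requests / 1_000 * price_per_1k
print(f"{requests:,.0f} requests ≈ ${cost:,.0f}")
```

That lands at about a million requests and $7k, the same order of magnitude as the ~$8k quote.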
Interesting how they censor the word "fuck", like it's going to affect your brain if you read it fully spelled out or something
Is it? I can lookup that word and see it in the pictures. Or is it the StreetView version that has been censored somewhere?
The pudding.cool article has a link labeled "View the map of “F*ck”" but it leads to a search for "fuck" instead. If you search for "F*ck", you find gems such as "CONTRACTOR F CK-UP" https://www.alltext.nyc/panorama/KhzY08H72wV2ldXamZU5HA?o=76... (Strategically placed pole obscuring the word.)
SEO, or family friendly values (maybe both!). Related: no swearing in the first minute of YouTube videos.
That's been changed (again). Iirc most swear words are now fine wherever they are in the vid.
Is that a youtube policy? It's so weird.
[dead]
Is there an API? I'd love to make a music video like the one in https://pudding.cool/2025/07/street-view/
Searching "Fool" gives a lot of OCR errors, some of which are due to occlusions: https://www.alltext.nyc/search?q=fool&p=3
"Surgery of the Fool" is my personal favorite.
Same with "fart," and it's an absolute delight: https://www.alltext.nyc/search?q=fart
"Fart bird special" is pretty funny, and "staff farting only" might be my favorite. Other good ones: "BECAUSE THE FART NEEDS," "Juice Fart," "WHOLESALE FARTS"
This must be great for OSINT. I wonder if intelligence agencies already have something like this for the whole world.
this is why i love HN.. dang it even found my childhood bagels store in Queens! https://www.alltext.nyc/search?q=bagels+jackson+heights <heart>
Reminds me of NY Cerebro, semantic search across New York City's hundreds of public street cameras: https://nycerebro.vercel.app/ (e.g. search for "scaffolding")
What is surprising to me is how low-res the public street cameras are. Combine that with the glare of car headlights ... :(
Ah yeah, this was the winning project at an NVIDIA and Vercel hackathon a while back
Related. Others?
All Text in NYC - https://news.ycombinator.com/item?id=42367029 - Dec 2024 (4 comments)
All text in Brooklyn - https://news.ycombinator.com/item?id=41344245 - Aug 2024 (50 comments)
I have a London one also if anyone is interested!
https://london.publicinsights.uk
All the dotcoms in NYC: https://www.alltext.nyc/search?q=.com&sm=e
Surprisingly, I can't seem to find any doors with notices from the sheriff's department or building department embarrassingly plastered on them. Am I misremembering how these are phrased verbatim, or are certain things censored?
I feel like street-view data is surprisingly underused for geospatial intelligence.
With current-gen multimodal LLMs, you could very easily query and plot things like "broken windows," "houses with front-yard fences," "double-parked cars," "faded lane markers," etc. that are difficult to generally derive from other sources.
For any reasonably-sized area, I'd guess the largest bottleneck is actually the Maps API cost vs the LLM inference. And ideally we'd have better GIS products for doing this sort of analysis smoothly.
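The GIS half of this is the easy part: once a multimodal model has emitted (lat, lon, label) tuples, plotting is just point aggregation. A toy sketch, with all coordinates and label names invented for illustration:

```python
# Sketch of the plotting half: assume a multimodal model has already tagged
# each panorama with (lat, lon, label); binning those points into a coarse
# grid is enough to heat-map "broken windows" etc. All values are made up.
from collections import Counter

detections = [
    (40.7128, -74.0060, "broken_window"),
    (40.7130, -74.0058, "broken_window"),
    (40.7580, -73.9855, "faded_lane_marker"),
]

def grid_cell(lat, lon, step=0.01):
    # ~1 km cells at NYC latitudes
    return (round(lat / step) * step, round(lon / step) * step)

counts = Counter(
    grid_cell(lat, lon)
    for lat, lon, label in detections
    if label == "broken_window"
)
print(counts)  # both broken-window points fall in the same cell
```

From there, any choropleth or heatmap tool can render the counts; the expensive part is the imagery and inference, not this step.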
Yes. I work at a company that is using street view to identify high-rise apartments with dangerous cladding for the UK gov. Also could use it for grouping nearby properties which were clearly built together and share features. Helps spread known information about buildings. You can also get the models to predict age and sometimes even things like double-glazing.
I made this - https://london.publicinsights.uk - and also operate a public records aggregator that has indexed, amongst other things, planning applications. I wonder if it could be of use?
This would tremendously help in making of a "Lavish" music video: https://youtu.be/flYgpeWsC2E
Found the classic EE UNSH (Embee Sunshade Co) which used to be EM EE UNSH (at least in a photo of mine taken 18 years ago) https://www.alltext.nyc/panorama/SSQGgn90zcClm6MdOlDOsA?o=31...
This would probably make John Wilson's job a lot easier (https://en.wikipedia.org/wiki/How_To_with_John_Wilson)
This is a super cool project. But it would be 10x cooler if they had generated CLIP or some other embeddings for the images, so you could search for text but also do semantic vector search like "people fighting", "cats and dogs", "red tesla", "clown", "child playing with dog", etc.
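The mechanics of that are simple once embeddings exist: embed each image offline, embed the query at search time, and rank by cosine similarity. A toy sketch with random stand-in vectors in place of real CLIP embeddings (a real version would get them from a package like open_clip):

```python
# Minimal sketch of embedding-based semantic search: one vector per image,
# one vector for the query, rank by cosine similarity. Random vectors stand
# in for real CLIP embeddings; pano ids are invented.
import numpy as np

rng = np.random.default_rng(0)
pano_ids = ["pano_a", "pano_b", "pano_c"]
pano_embs = rng.normal(size=(3, 512))                  # one vector per image
pano_embs /= np.linalg.norm(pano_embs, axis=1, keepdims=True)

def search(query_emb, k=2):
    query_emb = query_emb / np.linalg.norm(query_emb)
    scores = pano_embs @ query_emb                     # cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [(pano_ids[i], float(scores[i])) for i in top]

results = search(rng.normal(size=512))
print(results)
```

At the scale of 8 million panos you would swap the brute-force matrix product for an approximate nearest-neighbor index, but the query shape stays the same.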
First search for SAMO!
https://en.wikipedia.org/wiki/SAMO
But difficult to figure out if any of them are original.
I liked this one, but it is most likely newer. It is on top of the City-as-school building where Basquiat attended, so it is probably a tribute.
https://www.alltext.nyc/panorama/DZz7Gp1PtROe78ailUpvlA?o=11...
The first search I did was IRAK, the second, FAILE. Ghosts of graf.
hah, it can find all the KEST GAK stickers now: https://www.alltext.nyc/search?q=kest
https://www.alltext.nyc/search?q=ana+peru
Can’t find me any REVS tags. https://en.m.wikipedia.org/wiki/Revs_(graffiti_artist)
Instead shows me thousands of “Rev“
You need to toggle "exact match". Then you will find some, mostly together with COST. It's not a lot, though, probably a sign of a bygone era.
37,975 bagels in nyc! *w/ some dupes https://www.alltext.nyc/search?q=bagels
There's a lot of PIZZA in New York City!
> There's a lot of PIZZA in New York City!
New York is consistently rated alongside Naples as having the best pizza in the world.
The creator gave a talk that has more details on how it was done: https://www.youtube.com/watch?v=gfODe92DzLU
IIRC he found a way to download Street View images without paying, and used the OCR built into macOS (which is really good).
TIL : Shortcuts.app has an "Extract Text from Image" action.
[dead]
The next step should be to create a Street-View-style website for navigating around New York City, where only the text is visible and everything else is left blank/white.
“Sex” -> https://www.alltext.nyc/panorama/-FQLvskTncufoBXtcfi0aA?o=66...
Finally, this guy’s OCR-friendly long game pays off! https://www.alltext.nyc/search?q=BNE
what's BNE?
https://en.m.wikipedia.org/wiki/BNE_(artist)
BNE is an anonymous graffiti artist known for stickers that read "BNE" or "BNE was here". The artist has left their mark in countries throughout the world, including the United States, Canada, Asia, Romania, Australia, Europe, and South America. "His accent and knowledge of local artists suggest he is from New York."
This is exceedingly fun.
A game: find an English word with the fewest hits. (It must have at least one hit that is not an OCR error, but such errors do still count towards your score. Only spend a couple of minutes.) My best is "scintillating" : 3.
First lucky try, “calisthenics” scores a verified 1. It would be interesting if there was a Parquet file of the raw data.
https://www.alltext.nyc/search?q=Calisthenics
“perplexed” gets one hit. It appears in a Bible quotation on an abortion rights poster on West 77th Street. Someone is sleeping beneath the poster:
https://www.alltext.nyc/search?q=perplexed
One match: https://www.alltext.nyc/search?q=Buxom
Sloth returned surprisingly many results: 92.
Deviant returned 5 (c'mon NY, do better).
Sherpa: five, but two false positives, two Gap ads about Sherpa fleece, two genuine including Sherpa consulting, which seems pretty niche.
Defenestrate got zero.
I found "intertwining" with a score of 3 also. Two instances of the word on the same sign and then a false positive third pic.
At first glance, there's plenty of grog to be had in NYC. But sailors will be disappointed. It all seems to be OCR errors of "Groceries" or the "Google" watermarks.
There's even "West Indian Grog" to be found! Surely, that must be the rum based drink? https://www.alltext.nyc/panorama/63DKJbMdVQY8ah6jgobU9g?o=85...
Reporting a bug: 4,123,262 matches for "Google".
My explorations "obey", "injured?", "fuck trump", "fuck obama"
I was trying for various graffiti slogans, turns out the anarchy "(A)" is basically the most difficult thing in the world to search for lol, other political ideologies much easier to find. It did amusingly lead me to search for just "anarchy" which led to 4 pages of bus ads for a show by the "Sons of Anarchy" guy.
EDIT: Lol, "communism" leads to 39 pages of Shen Yun billboards.
The word search for "fart" shows the tool's limits. No entry I saw actually said the word fart, but each was listed as doing so -- "fart nawor" ("hearts around the world" irl), "the penny farting" ("the penny farthing" irl), etc.
Under the search button there is a drop down. Enable "exact match" and filter low ocr confidence. Still has many false positives, but you'll also see the "fart king".
I immediately looked up "Blob Dylan"
I searched "norse", but it didn't give me any good results at all; lots of hallucinations when you check the sources it found.
I _love_ this but it's pretty bad. I searched for "Morgue" and one of the matches was the "2025 Google" watermark which it thought was "Big Morgue"
Again, a complex problem and I love it...
Some entertaining misreads:
https://www.alltext.nyc/search?q=Sex
amazing. look up some graffiti writers you know
Search for “fart” if you want a good laugh.
As others have mentioned, the idea is so cool, but the text recognition is abysmal.
Agreed, how in 2025 is an OCR model reading this as "Bobbins"?
https://www.alltext.nyc/panorama/z0SOvmU-5_yuspnsFvjVuA?o=16...
It worked perfectly on the two tests I tried: the GSA building in SoHo, and BKLYN Blend in Bedstuy.
Searching for “foo” is humorous, it’s mostly restaurants with signs that say “food” but the “d” is cropped.
I typed in "fart" and none of the results on the first page were actually the word "fart".
I also did this. But I wasn’t mad, I was amused.
[dead]
520 matches on "hotdog", 8,084 matches on "massage", in no particular order.
This is pretty cool! I'm curious what was used for OCR? Amazon Mechanical Burp?
I could spend hours sending nonsensical queries to this site (but probably shouldn't).
Enviable idea.
https://www.alltext.nyc/search?q=this+is+not
“Andrew Yang” “Mamdani” “Eric Adams”
Mamdani is just one dude's gynecology clinic. I wonder when the data was pulled?
edit: I found mentions of Gaza bombings and there's cars with like #gaza on it so my guess is sometime in the last 2 years.
I could of course look it up but this is a game now for me, like when I found a hella old atlas in a library and tried to figure out the date it was published just by looking at the maps.
Hope he gets to enjoy the freedom of soccer balls hitting the wall outside his flat 16/7.
I like it. I am hoping there is a similar one for Austin, TX
I’d love to see a mash up of this and the historical street view archive from the city archives.
Pretty cool
Cool concept, but the accuracy seems quite low. The hits for "pedo" are pretty hilarious, though! https://www.alltext.nyc/search?q=pedo&p=2
When you search 'google'... you'll see... lol
PERU ANA
"$1 Pizza"
[dead]
Gosh! Maybe one of these days someone will take time off from this cultural wonderment to construct a simple, easy to use, text-to-audio.file program - you know, install, paste in some text, convert, start-up a player - so that the blind can listen to texts that aren't recorded in audiobooks. Without a CS degree.
I think the issue is that the compute power needed for good voice models is far from free, just in hardware and electricity, so any good text-to-audio solution likely needs to cost some money. Wiring up Google Vertex AI text-to-speech or the AWS equivalent is probably something ChatGPT could walk most people through, even without a CS degree: a simple Python script you could authenticate from a terminal command, and it would maybe cost a couple of bucks for personal usage.
A paid service of that simplicity probably doesn't exist because there are other tools that integrate better with how the blind interact with computers (I doubt it's copying and pasting text), and those tools are likely more robust, albeit expensive.