The following is a "Get off my lawn!" comment:
When I was in kindergarten, we had exercises where you'd get a paragraph of text with blanks to fill in, and next to those blanks were pictograms of whatever noun was expected.
Whenever I see people overuse emojis, for example "Yesterday we flew [plane emoji] to Japan [Flag of Japan] and took the train [bullet train emoji] and saw Mt. Fuji [emoji of Mt. Fuji]", I always think, "This person is still in kindergarten."...
You mean, like the fastapi documentation used to be?
https://github.com/fastapi/fastapi/issues/3273
It sounds like a person who does not know their emojis: Japan is the only country to have its geography available as an emoji, so why wouldn't you use it in this case instead of the flag?
https://emojipedia.org/map-of-japan
Because when writing you use proper nouns.
You do, but others don't, and I can't imagine anyone getting confused by a phrase like "I went on a [plane emoji] yesterday!" so it's not a barrier to communication. So where's the issue?
Because it's harder to read. The example another comment linked to is a good demonstration: https://github.com/fastapi/fastapi/issues/3273
Hey HN! I've been playing around with Apple's CoreML framework for some personal projects, and wanted to see how it might work in a CLI context. This is really just something fun I did over the weekend for sh*ts and giggles. I hope you enjoy!
This is lovely and brings some whimsy to the terminal!
Please consider making it available on MacPorts (for those who don’t use Homebrew). Anyone else here who can bring this to MacPorts?
I’ll bring it over to MacPorts soon unless someone beats me to it
This made me grin and I love that it did. Sometimes our profession can be a little short on whimsy and I think projects like this are actually really important! I’m looking forward to using this :)
It may have taken 56 years, but this comprehensively resolves the question about whether shell is better than GUI in favor of shell. Thanks to the endless composability of shell, just this one program finally fixes the biggest flaws in text file handling in Unix, but the GUI equivalent would be quite complicated and composes poorly. This vindicates all of the original developers of UNIX once and for all.
That's not fixing the biggest flaw, it's just addressing a small set of specific use cases.
If anything, it's a bandaid that highlights the biggest flaw of UNIX philosophy: everything is passed as unstructured text. Because of it, half of shell programming is just piecing together ad-hoc, buggy parsers that interpret the input, possibly rearrange it, and then dump it down the pipeline as unstructured text, so the next step can do it all over again. And then, of course, every CLI program has to do that too.
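To make that concrete, a toy example of the pattern (the awk field numbers assume ls's default long-format columns, and the whole thing quietly breaks on filenames with spaces): `ls -l | awk 'NR > 1 {print $9, $5}'` scrapes out the name and size columns, and every consumer further down the pipeline has to re-derive that same fragile contract from scratch.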
This program is using a machine learning model to parse its input; while this may be the only reasonable way to go about guessing emojis from arbitrary text, I can easily see people doing the same to parse outputs of Linux CLI tools at runtime, because it looks much more pleasant and might be even more reliable than writing input parsers by hand. Let's pause here to consider the absurdity of that situation.
I don't use PowerShell often at all, but this IMO is one thing it gets right, by piping .NET objects and implicitly piping to `Out-Default` at the end.
Of course this is another standard that tools would have to be incredibly careful to keep track of, but JSON is decently mature and a bunch of modern tools can operate in it, so at least there's slow progress away from the plaintext quagmire.
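As a concrete taste of that progress (assuming you have iproute2 and jq, which most modern Linux boxes do): `ip -j addr | jq -r '.[].addr_info[]?.local'` lists every local address by walking actual structure, with no column-counting or whitespace-guessing anywhere.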
I feel you’ve gone on a tangent there. Particularly your unrelated comment about using ML because people are too lazy to write parsers.
That all said, I don’t really disagree with any of your individual points. But teemoji is clearly not meant to be taken as a serious tool, so I wouldn’t be too critical of UNIX just because it exists.
I'm not critical of UNIX for this tool specifically, and I think it would be useful even if UNIX did things differently - "pick the best-fitting emoji for an arbitrary line of text" is a well-defined task, and I hit scenarios where I wish I had such a tool surprisingly often.
An AI-powered dev tool? I expect to see this in the next YC batch
/usr/local/bin/teemoji: line 2: /usr/local/Cellar/teemoji/0.0.4/libexec/teemoji: Bad CPU type in executable
:sadface:
intel?
my major and minor complaints are, in order:
tee does something very specific: it makes an unmodified copy into a file (one branch of the plumbing tee-joint) as it passes the stream to stdout (the other branch), as opposed to sed or awk or even grep, et al., which modify the stream (see the one-liner below). How in hell is this inspired by tee, which does not modify its inputs?
and who capitalizes Tee?
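For reference, what plain tee does, with nothing modified on either branch: `make 2>&1 | tee build.log | grep -i error` saves the complete build output to build.log while handing the identical stream on to grep.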
Because it writes to a file like tee.
This is so cute and funny and interesting. Thanks for making and sharing.
I was doing `echo cat | teemoji` tests and it would work, but ironically `echo happy face | teemoji` and the like didn't work so well for many other obvious single-word emojis. But it did a "checkered flag" for "I got the job done".
Glad you enjoy it! There are some inconsistencies/surprises in what emoji gets suggested by the model. I’m hoping to generate some better training data and refine the outputs in the next release.
I wish I could get this on Linux </3 Any ideas how to get there (without reaching out to network APIs, like OpenAI)? I assume that since it's built on Apple's CoreML framework it's not possible?
The training data is in the resources, so you could use that to train a model locally and invoke it.
I’ll have to look into this; it’s something I’d love to add support for if possible!
I feel like this defeats the goal of emojis and icons: highlighting important information for our brain to process. This is just an overwhelming amount of emojis for me, but I guess it has its use cases.
This is really cute! Thanks for sharing :)
If I may offer a small nitpick as feedback: I am seeing wrench emojis on empty lines, which creates a lot of noise.
If you are open to PRs (with me making the change), I might try to take a look at the code and see if it's possible for me to contribute a fix.
Does it produce the infamous emoji if a certain expletive occurs in the input?
By the way, I use that emoji to test whether astral planes are handled correctly.
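(For anyone puzzled: the "astral planes" are the Unicode planes above U+FFFF, whose code points need surrogate pairs in UTF-16 and four bytes in UTF-8, which makes them handy canaries for encoding bugs. Any such character works; in a reasonably recent bash, `printf '\U0001F600\n' | hexdump -C` should show the four-byte sequence f0 9f 98 80.)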
I need to recover from the cognitive dissonance now...
what
Looks like a Markov chain lol
Awesome! Pretty impressive that the model only weighs ~200 KB
You saw "cat -v Considered Harmful" and thought ... hold my beer!