VLC media player, the popular open-source video software developed by nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle feature.
I know people are gonna freak out about the AI part in this.
But as a person with hearing difficulties, this would be revolutionary. So much shit I usually just can't watch because OpenSubtitles doesn't have any subtitles for it.
The most important part is that it's a local LLM running on your machine. The problem with AI is less about LLMs themselves and more about how unethical companies and governments control and apply them in a world driven by profit and power. This is none of those things; it's just some open-source code running on your device. So that's cool and good.
Just an important note: speech-to-text models aren't LLMs, which are literally "conversational" or "text generation from other text" models. Things like https://github.com/openai/whisper are their own, separate type of model, built specifically for transcription.
That being said, I totally agree, accessibility is an objectively good use for "AI"
Not sure if the .com one is supposed to be a more modern frontend for the .org or something, but I've found different subtitles on each, so it's good to use both.
VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!
Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.
They're awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone except those with accents similar to the team that developed the auto-captioning), it makes egregious errors; it's exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and Northeastern US accents. In my experience "using" it, I find it nigh unusable.
YouTube's removal of community captions was the first time I really started to hate YouTube's management. They removed an accessibility feature for no good reason, making my experience significantly worse. I still haven't found a replacement for it (at least, one that actually works).
I've been working on something similar-ish on and off.
There are three (good) solutions involving open-source models that I came across:
KenLM/STT
DeepSpeech
Vosk
Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of its memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast and lightweight enough.
What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is the usual choice).
One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts the text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of 50/50 chancing it.
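For the curious, pulling those alternatives out of vosk-api looks roughly like this. Treat it as a sketch: the model path and file name are placeholders, and the merge shown is a much cruder, utterance-level version of what my tool does (the real thing aligns word by word):

```python
from vosk import Model, KaldiRecognizer
import wave
import json

# Assumptions: a downloaded, unpacked Vosk model directory and a 16-bit mono WAV.
model = Model("vosk-model-en-us-0.42-gigaspeech")
wf = wave.open("audio.wav", "rb")

rec = KaldiRecognizer(model, wf.getframerate())
rec.SetMaxAlternatives(10)  # ask for up to 10 hypotheses instead of a single best guess

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        res = json.loads(rec.Result())
        # With SetMaxAlternatives, Result() returns
        # {"alternatives": [{"confidence": ..., "text": ...}, ...]}
        texts = [alt["text"] for alt in res["alternatives"]]
        unique = list(dict.fromkeys(texts))
        # Naive merge: print "(a|b|...)" whenever the hypotheses disagree.
        print(unique[0] if len(unique) == 1 else "(" + "|".join(unique) + ")")
```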
In my experiments, the Whisper models I can run locally are comparable to YouTube's: not production quality, but certainly better than nothing.
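Concretely, my pipeline is roughly the following (a sketch, not my exact script; the file names and model size are placeholders):

```python
import whisper  # pip install openai-whisper

def srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,345."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("small")  # pick a size your machine can handle
result = model.transcribe("episode.wav")

# Write the timestamped segments out as a standard .srt file.
with open("episode.srt", "w") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```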
I've also had some success cleaning up the output with a modest LLM; a sketch of what I mean is below. I suspect the VLC folks could do a good job with this, though I'm put off by the mention of cloud services. It depends on how they implement it.
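For the cleanup step, something like this is all I mean. It assumes a small local model served by Ollama (the model name is whatever you have pulled), and the prompt is just an illustration:

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

def clean_subtitle_line(line: str) -> str:
    # Ask a small local model to repair punctuation, casing and obvious
    # mis-hearings without inventing new content.
    resp = ollama.chat(
        model="llama3.2",  # placeholder: any small instruct model you have pulled
        messages=[{
            "role": "user",
            "content": (
                "Fix punctuation, casing and obvious transcription errors in "
                "this subtitle line. Reply with the corrected line only:\n" + line
            ),
        }],
    )
    return resp["message"]["content"].strip()

print(clean_subtitle_line("i cant remember weather we won"))
```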
Since VLC runs on just about everything, I'd imagine that the cloud service will be best for the many devices that just don't have the horsepower to run an LLM locally.
There are other good uses of AI. Medicine. Genetics. Research, even into humanities like history.
The problem has always been the grifters who insist on calling any program more complicated than adding two numbers "AI", shoving random technologies into random products just to further their cancerous sales shell game.
The problem is mostly CEOs and salespeople thinking they are software engineers and scientists.
The app Be My Eyes pivoted from crowd-sourced assistance for the blind to using AI, and it's just fantastic. AI is truly helping lots of people in certain applications.
I know Jeff Geerling on YouTube uses OpenAI's Whisper to generate captions for his videos instead of relying on YouTube's. Apparently they're much better than YouTube's, being nearly flawless. My guess is that Google wants to minimize the compute it spends processing videos to save money.
Spoiler: they will! I use the FUTO keyboard on Android; its speech-to-text uses an AI model, and it's amazing how well it works. The model it uses is absolutely tiny compared to what a PC could run, so VLC's implementation will likely be even better.
I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.
In most cases, people will go for manually written subtitles over autogenerated ones, so the use case here is mostly situations where no better, human-created subtitles are available.
I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.
Subtitling by hand takes sooooo fucking long :( People who do it really are heroes. I did community subs on YouTube when that was a thing, and subtitling + timing a 20-minute video took me six or seven hours, even with tools that suggested text and helped align it to sound. Your brain instantly notices something is off if the subs are misaligned.
Oh shit, I knew it was tedious but it sounds like I seriously underestimated how long it takes. Good to know, and thanks for all you've done.
Sounds to me like big YouTubers should pay subtitlers, but that's still a small fraction of audio/video content in existence. So yeah, I guess a better wish would be for the tech to improve. Hopefully it's on the right track.
I did this for a couple of videos too. It's actually still a thing; it was just so time-consuming for no pay that almost nobody did it, so creators don't check the box that allows people to contribute subs.
You can use tools like Whishper to pre-generate the subtitles. You'll get pretty accurate subtitles at the right times; then you can edit the errors and maybe adjust the timings.
But I guess this workflow will work with VLC in the future as well.
Yep. That should always be paid work. It takes forever. I tried to subtitle the first Always Sunny episode and got very nice results, even when they talked over one another. But getting the timing perfect, so one line was hidden just as the next appeared, was tedious af. All in all, those 25 minutes cost me about the same number of hours. It's just not feasible.
Iirc this is because of how they've optimized the file reading process; it genuinely might be more work to add efficient frame-by-frame backwards seeking than this AI subtitle feature.
That said, jfc, please just add backwards seeking. It is so painful to use VLC for reviewing footage. I don't care how "inefficient" it is; my computer can handle any operation on a 100 MB file.
If you have time to read the issue thread about it, it's infuriating. Multiple viable suggestions get dismissed because they don't work in certain edge cases where no method at all could possibly work, and where they could simply fail gracefully.
I don't mind the idea, but I'd be curious where the training data comes from. You can't just train off users' (unsubtitled) videos, because you need subtitles to know whether the output is right or wrong. I checked their Twitter post, but it didn't seem to help.
Still no live audio encoding without the CLI (unless you stream to yourself), so no plug-and-play with Dolby/DTS.
Encoding params still max out at 512 kbps on every codec without the CLI.
Can't switch audio backends live (minor inconvenience, tbh)
Creates a barely usable, non-standard M3A format when saving a playlist.
I think those are about my only complaints with VLC. The default subtitles are solid, especially with multiple text boxes for signs. Playback has been solid for ages. It handles lots of tracks well, and it doesn't just wrap ffmpeg, so it's very useful for testing or debugging your setup against mplayer or mpv.
I've been waiting for break-free (gapless) playback for a long time. Just play Dark Side of the Moon without breaks between tracks. Surely a single thread could look ahead and see that the next track doesn't need any different codecs launched; it's technically identical to the current track, so there's no need for a break.
/rant
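To spell out the look-ahead I mean, here's a toy sketch (nothing to do with VLC's actual internals; the type and check are mine):

```python
from dataclasses import dataclass

@dataclass
class TrackFormat:
    codec: str
    sample_rate: int
    channels: int

def can_continue_gapless(current: TrackFormat, nxt: TrackFormat) -> bool:
    # If nothing about the decode path changes between tracks, there is no
    # technical reason to tear the pipeline down and reopen it.
    return current == nxt

# Two tracks ripped from the same album share every parameter:
assert can_continue_gapless(TrackFormat("flac", 44100, 2),
                            TrackFormat("flac", 44100, 2))
```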
my state banned pornhub so I made a big ass stash just in case, so yeah I guess. I also have a stash of music from YouTube in case they ever fully block YT-DLP, so I'm just a general data hoarder.
Fuck no. Leave the subtitles alone. Make people learn something, like searching for and applying subtitle files, or actually make them write their own and give back, for a change.
I am fortunate not to be deaf (yet), but I have, in fact, written the subtitles for various titles and submitted them to OpenSubtitles, both in English and in my own native language.