On Discord, the black hole for useful information.
Cohere’s command-r models are trained for exactly this type of task. The real struggle is feeding relevant sources into the model: there are plenty of projects that have attempted it, but few can do more than pull the first few search results.
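If you want to experiment, here’s a minimal sketch using Cohere’s v1 Python SDK (the API key, query, and snippets are placeholders, and actually retrieving good sources is the part left to you):

import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# command-r accepts grounding documents directly;
# each document is just a dict of string fields
response = co.chat(
    model="command-r",
    message="Summarize what the sources say about X",  # placeholder query
    documents=[
        {"title": "Source 1", "snippet": "retrieved text goes here"},
        {"title": "Source 2", "snippet": "more retrieved text"},
    ],
)
print(response.text)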
There should be no difference, because the video track hasn’t been touched. Some software displays the length of the longest track rather than the length of the main video track. It’s likely the audio track was originally longer than the video track, and because of the offset it’s now shorter.
You can use tools like ffmpeg and mediainfo to count the actual frames in each to verify.
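For example, something like:

ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -of csv=p=0 input.mkv
mediainfo --Inform="Video;%FrameCount%" input.mkv

(input.mkv is a placeholder; ffprobe ships with ffmpeg.) If both files report the same frame count, the video track is untouched and only the container/track metadata differs.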
Koboldcpp should allow you to run much larger models with a little bit of RAM offloading. There’s a fork that supports ROCm for AMD cards: https://github.com/YellowRoseCx/koboldcpp-rocm
Make sure to use quantized models for the best performance, Q4_K_M being the standard.
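As a rough example, assuming a GGUF model (the filename and layer count are placeholders; tune --gpulayers to whatever fits in your VRAM and let the rest sit in system RAM):

python koboldcpp.py --model mistral-7b.Q4_K_M.gguf --gpulayers 20 --contextsize 4096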
I only have 60 down and 12 up so I cap about 80% of the time with a short uncapped window late at night.
tun0 is the interface most VPNs use, so I assume Proton is the same.
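Easy to check once you’re connected, e.g.:

ip route get 1.1.1.1

which prints the interface your traffic actually leaves on (1.1.1.1 is just an arbitrary outside address).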
French law doesn’t recognize software patents, so VideoLAN doesn’t either. This is likely a reference to VLC supporting h265 playback without verifying a license. These days most open-source software pretends the h265 patents and licensing fees don’t exist, for convenience. I believe libavcodec is distributed with support enabled by default.
Nearly every device with hardware-accelerated h265 support has already had the license paid for, so there’s not much point in enforcing it. Only large companies like Microsoft and Red Hat bother with the licensing.
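You can check whether your own ffmpeg/libavcodec build has it enabled with:

ffmpeg -decoders | grep hevc

If the hevc decoder shows up, playback support was compiled in, licensing questions aside.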
That isn’t necessarily true, though for now there’s no way to tell, since they’ve yet to release their code. If the timeline is anything like their last paper, it will be out around a month after publication, which would be Nov 20th.
There have been similar papers on confusing image classification models; not sure how successful they’ve been IRL.
Get yt-dlp, then run: yt-dlp -x 'video-url'
I believe if you’re willing to check the format codes on the video, you can download the audio-only stream directly, but either way you’ll get the least compressed audio available.
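For example (bestaudio just picks the best audio-only format; run -F first to see the actual codes for your video):

yt-dlp -F 'video-url'
yt-dlp -f bestaudio 'video-url'

As I understand it, -x grabs the same audio-only stream and then runs it through ffmpeg to extract/convert it.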
I’ve used the TP-Link ones they’re using and they’ve been pretty solid. I can’t say how they’d fare in a 24/7 setup though, since they’re not really intended for that.
Middle mouse? What’s that?
I use Okular as my primary image viewer as well. I love the middle-mouse drag to zoom.
The big issue for me is that there’s any regression between generations at all. My current five-year-old flagship has a headphone jack, expandable storage, and Bluetooth 5.0 support, which is all most people need. The only new phones that still have all three are cheap budget phones that fall short in other areas compared to the one I already have.
LLMs only predict the next token. Sometimes those predictions are correct, sometimes they’re incorrect. Larger models trained on more examples make better predictions, but they are always just predictions. This is why incorrect responses often sound plausible even when they make no logical sense.
“Fixing” hallucinations is more about reducing inaccurate predictions than repairing an actual defect in the model itself.
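A toy sketch of what “predict the next token” means (made-up vocabulary and scores, greedy selection for simplicity):

import math

# hypothetical raw scores (logits) the model assigns to candidate next tokens
logits = {"Paris": 7.1, "London": 3.2, "banana": -1.0}

# softmax turns the scores into a probability distribution
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# greedy decoding: emit the most probable token, whether or not it’s true
print(max(probs, key=probs.get))  # -> Paris

The model has no notion of correct, only of probable; a fluent-but-wrong answer is just a high-probability prediction that happens to be false.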
A front for tax evasion?
I’d guess the 3 key staff members leaving all at once without notice had something to do with it.