I wrote a bit of BASIC on my Spectrum but there was a reason they had keyword shortcuts on that keyboard. It wasn’t until I got my Dragon 32, which had a proper keyboard, that I really got into coding.
FLOSS virtualization hacker, occasional brewer
My dad failed his 11+ so was sent to a technical school, where he actually learnt how to lay a row of bricks and beat out lead flashing. He did end up doing a PhD in Physics, but I suspect his early school years explain why he’s always been much more practical than me. My wife was a stage tech during uni so I’ll happily defer to her for joinery. I can just about solder a copper pipe or big pads on a PCB.
Most people are just like us, trying to make their way through the world. The number of people who are actively assholes is fairly small, but they tend to have an outsized effect on our day when we encounter them.
Self-hosting takes time and energy, and most open source developers join projects because they are interested in the project, not in becoming admins. On top of that, building a CI system is an expensive undertaking when a lot of hosting solutions provide a fair amount of compute for free to qualifying projects.
I tried all sorts of port forwarding tricks to get WireGuard working on the VM that runs my HA instance, to no avail. The Tailscale solution works really well. The only real problem I had was MagicDNS conflicting with DNS66 on my phone (which I use for ad blocking). In the end I just used a hardwired VPN IP for my HA connection.
Alcohol isn’t that great as an organic solvent. Are you using the air fryer to evaporate? That must be a fair fire risk!
Butane on the other hand is a good organic solvent and will evaporate at room temperature (just don’t evaporate it in a room or near any heat source).
Random racists are just background noise these days. I was comparing media coverage and comments from panellists on things like Question Time. It was certainly an area of comment for Blair and less so for Sunak, from my recollection.
I guess I didn’t notice it in the coverage I watched. Was it the Daily Mail or just the dregs of the internet?
There was (manufactured?) outrage when Tony Blair converted after his premiership. I don’t think the topic of the current UK prime minister’s religion even came up when he was appointed. I guess that’s progress.
Liquid gas column extraction of organic compounds? I’m told that’s something you should definitely do outside!
Unpackaged goods tend to have a shorter shelf life, so can lead to more wastage. It needs a holistic analysis from farm to table to work out the best trade-offs for reducing waste.
The ISA may be open, but I’m pretty sure the microarchitectures will be totally proprietary. Even with a kick-ass microarchitecture they may still struggle if they can’t use the latest process nodes to actually manufacture the chips.
Having said that, I suspect the main challenge RISC-V is going to face is the software ecosystem. That stuff can take a decade to build and requires a degree of cooperation between all the companies building chips.
That’s just an architectural description; any non-toy implementation is still proprietary. That’s without solving the layout and tape-out for whatever highly proprietary process node you plan to build on.
This is essentially what org-mode files are: plain text files with a bit of markup so they can be organised or rendered to other formats.
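For anyone who hasn’t seen one, a minimal org file is just readable text with a little structure - something like this made-up example:

```org
#+TITLE: Household notes
* Projects
** TODO Re-plumb the brewing rig
   DEADLINE: <2024-06-01 Sat>
** DONE Fix the leaking valve
* Reference
  Plain /emphasis/ and *bold*, plus links:
  [[https://orgmode.org][the org-mode site]]
```

Inside Emacs, exporting that to HTML, LaTeX or plain text is a single command (C-c C-e), but the file stays perfectly usable with any text editor.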
This seems like an excellent idea and hopefully provides a model for other media outlets to follow.
Have any actually passed yet? Sure, LLMs can now generate plausible text far better than previous generations of bots, but they still tend to give themselves away with their style of answering and random hallucinations.
Do you usually have some other front-end over the model? I can run llama.cpp directly in interactive mode but the results are a little underwhelming. However, there seem to be various front-ends that get better results. Is this down to better prompting and parameter control? I’ve seen temperature mentioned in relation to ChatGPT, but I have no idea what the RoPE and YaRN factors are for.
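From what I can tell, most of the front-ends are doing prompt templating plus saner sampling defaults, and you can get much of that from llama.cpp directly. A sketch (the binary and model paths are placeholders for whatever you have built locally; flag names as in recent llama.cpp builds):

```shell
# Interactive session with explicit sampling parameters (a sketch -
# adjust the binary and model paths for your own build).
# --temp: higher = more random token sampling
# --top-k / --top-p: only sample from the most likely tokens
# --repeat-penalty: discourages the model looping on itself
./main -m ./open_llama_7b_v2.Q4_K_M.gguf \
  --interactive-first --ctx-size 2048 \
  --temp 0.7 --top-k 40 --top-p 0.9 --repeat-penalty 1.1
```

As I understand it, the RoPE/YaRN options (`--rope-freq-base` and friends) are not sampling controls at all - they stretch the context window beyond what the model was trained with.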
Is there a standard for the suffixes? For example the OpenLlama models here: https://huggingface.co/SlyEcho/open_llama_7b_v2_gguf/tree/main have qN and then a mix of K, M, 0 and 1 suffixes. The q I assume is the quantisation level, but measured how? Does q2 mean 2 bits per weight? That seems very small - and what are the weights stored as, fixed point, floats, integers?
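My hedged understanding of the GGUF naming: qN is nominally N bits per weight, stored as small integers in blocks that share floating-point scale factors, so the effective bits per weight come out somewhat higher than N; _0 and _1 are the older block formats, _K the newer “k-quant” ones, and the S/M/L variants choose how many tensors keep higher precision. The figures below are rough numbers from memory, not a spec:

```python
def est_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Back-of-the-envelope model file size: params * bits / 8, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Effective bits per weight including block-scale overhead (approximate):
for name, bpw in [("q2_K", 2.6), ("q4_0", 4.5), ("q4_K_M", 4.8), ("q8_0", 8.5)]:
    print(f"{name}: ~{est_size_gb(7e9, bpw):.1f} GB for a 7B model")
```

So q2 isn’t quite as extreme as it sounds: the per-block scales claw back some precision, at the cost of a few extra bits per weight.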
Where is the sweet spot for running CPU-bound models? I’ve just started playing with llama.cpp but the big models do make the cores work pretty hard. Should I look at using quantisation or more fine-tuned models for the tasks I care about (developer assistance, mainly)?
Why do the $20 subscription when the API pricing is much cheaper, especially if you are trying different models out? I’m currently playing about with Gemini and that’s free (albeit rate limited).
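To put rough numbers on it - the rates below are illustrative placeholders, not any provider’s real prices, so check the current price sheets:

```python
def api_cost_usd(in_tokens: int, out_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """API cost in dollars from token counts and per-million-token prices."""
    return (in_tokens * in_price_per_m + out_tokens * out_price_per_m) / 1e6

# Hypothetical month of casual use: 1M input tokens, 200k output tokens,
# at made-up rates of $3/M input and $15/M output.
cost = api_cost_usd(1_000_000, 200_000, 3.0, 15.0)
print(f"~${cost:.2f} for the month vs $20 flat")  # ~$6.00 for the month
```

Light users come out well ahead on the API; it only flips once your monthly token volume pushes past the flat fee.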