

Yes, that’s a valid distinction. Though practically speaking, I don’t really know how different it is from Anthropic sharing with Amazon, for example.
Gee, I wonder if Grok shares data with the rest of Twitter, Gemini with the rest of Google, or Llama… actually, I haven’t used Facebook in ages; I don’t even know if there’s a ChatGPT-equivalent service on Facebook.
I do actually wonder if Anthropic shares data with Amazon, or OpenAI with Microsoft (their majority shareholders). That would be a direct 1:1 comparison with what’s happening between DeepSeek and ByteDance (though at least in the latter case you can host your own, since the model is open source).
To me synthwave sounds like a lot of trance or progressive trance from the late 90s/early 2000s - Tilt, Paul Oakenfold, Sasha & Digweed, Tiesto, BT
Yes! Slip the soundboard guy your Discman and $20 and get a perfect recording. I remember a few times where there was a stack of discmans and walkmans (Walkman?) recording.
Ah, dammit, so I did. The rest of the numbers are true; they’re just not as close to 1 kg’s worth as I’d noted.
A kilo of gold is worth about $193k currently, which means different things depending on where you live and how old you are. For example, if that were your whole net worth and you were a Baby Boomer in the US, you’d be about $1.5M below the average family. If you’re under 35, though, you’d be slightly above average. (Via Kiplinger)
FWIW, because the top 1% hold so much wealth, they skew the average significantly - overall, the median net worth in the US is right around that $193k number, but the average is just over $1M, which is pretty amazing.
$200K in net worth would just about put you into the global top 10%, and into the top 1% if that were your earnings for the year.
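
If you want to sanity-check the kilo-of-gold figure yourself, here’s a minimal sketch of the conversion; the $2,900/oz spot price is purely an illustrative assumption, since spot moves constantly:

```python
# Sanity-check the value of 1 kg of gold from a spot price.
# Assumption: the $2,900/troy-oz figure below is illustrative only.
GRAMS_PER_TROY_OZ = 31.1035
TROY_OZ_PER_KG = 1000 / GRAMS_PER_TROY_OZ  # ~32.1507 troy oz per kg

def kg_value(spot_per_troy_oz: float) -> float:
    """Dollar value of 1 kg of gold at the given spot price per troy ounce."""
    return spot_per_troy_oz * TROY_OZ_PER_KG

print(f"${kg_value(2900):,.0f}")  # ~$93,237 at $2,900/oz
```

Dividing the other way, $193k per kg implies a spot price of roughly $6,000/oz, which is an easy way to check the claim against whatever gold is actually trading at.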
The main findings from the Economic Index’s first paper are:
- Today, usage is concentrated in software development and technical writing tasks. Over one-third of occupations (roughly 36%) see AI use in at least a quarter of their associated tasks, while approximately 4% of occupations use it across three-quarters of their associated tasks.
- AI use leans more toward augmentation (57%), where AI collaborates with and enhances human capabilities, compared to automation (43%), where AI directly performs tasks.
- AI use is more prevalent for tasks associated with mid-to-high wage occupations like computer programmers and data scientists, but is lower for both the lowest- and highest-paid roles. This likely reflects both the limits of current AI capabilities and practical barriers to using the technology.
Interesting, not really surprising, and nowhere near as entertaining as when Pornhub does its annual introspection.
The “innovation” in the article is passive tech for fiber to the room (FTTR), specifically made to be low-cost and easier to implement. It’s also how your computer might get that 50Gbit: it’ll have to be wired in with a fiber connection. It’s not happening over Wi-Fi (or even Ethernet).
I think “good” and “bad” are hard terms to apply to people objectively, but I do believe that most people value social coherence and are willing to do (the minimum amount of) something to maintain it. If you can’t believe at least that, it means all of those thin blue line people are right, and I’m just not willing to believe that’s true.
That our benevolent alien overlords are gonna show up aaaaaany minute now…
Kinda funny how, when megacorps can benefit from the millions upon millions of developer hours they’re not paying for, they’re all for open source. But when the megacorps have to ante up (with massive hardware purchases out of reach of any of said developers), they’re suddenly less excited about sharing their work.
Have you had any luck importing even a medium-sized codebase and doing reasoning on it? All of my experiments start to show subtle errors past 2k tokens, and at 5k tokens the errors become significant. Any attempt to ingest and process a decent-sized project (say 20k SLOC plus tooling/overhead/config) has been useless, even on models that “should” have a large enough context window.
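
For scale, here’s a rough sketch of how to estimate what a project costs in tokens before you even try; it assumes tiktoken’s cl100k_base encoding as a stand-in for whatever tokenizer your model actually uses, and a hypothetical extension list you’d adjust for your stack:

```python
# Rough estimate of how many tokens a codebase consumes as context.
# Assumptions: tiktoken's cl100k_base encoding as a proxy tokenizer,
# and an illustrative set of file extensions; adjust both to taste.
from pathlib import Path

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
extensions = {".py", ".ts", ".go", ".rs", ".toml", ".yaml"}

total = 0
for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in extensions:
        total += len(enc.encode(path.read_text(errors="ignore")))

print(f"~{total:,} tokens")
```

Code tends to run somewhere around 8-12 tokens per line, so a 20k-SLOC project plus tooling and config easily lands in the low hundreds of thousands of tokens before you’ve asked a single question, which is why “should fit” and “actually reasons reliably” turn out to be very different claims.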