  • Nice find. There are specific reasons why this patchset won't be merged as-is and I suspect that they're all process issues:

    • Bad memory management from Samsung not developing in the open
    • Proprietary configuration for V4L2 video devices from Samsung not developing with modern V4L2 in mind
    • Lack of a V4L2 compliance report, from Samsung developing against an internal testbed rather than following V4L2's preferred process
    • Lack of firmware because Samsung wants to maintain IP rights

    Using generative tooling is a problem, but so is being stuck in 2011. Linux doesn't permit this sort of code dump.

  • House Democrats have dripped more details from Epstein files and we have surprise guests! They released an un-OCR'd PDF; I'll transcribe the mentions of our favorite people:

    Sat[urday] Dec[ember] 6, 2014 ZORRO … Reminder: Elon Musk to island Dec[ember] 6 (is this still happening?)

    Zorro is a ranch in New Mexico that Epstein owned; Epstein was scheduled to be there from December 5-8, so Musk and Epstein would not have been at the island together. Combined with the parenthetical uncertainty about whether the visit was still happening, did Epstein perhaps want to grant Musk some plausible deniability by not being present?

    Mon[day] Nov[ember] 27, 2017 NY … 12:00pm LUNCH w/ Peter Thiel [REDACTED]

    From the rest of the schedule formatting, the redacted block following Thiel's name is probably not a topic; it might be a name. Lunch between two rich financiers is not especially interesting but lunch between a blackmail-gathering Mossad asset and an influencer-funding accelerationist could be.

    Sat[urday] Feb[ruary] 16, 2019 NY-LSJ 7:00am BREAKFAST w/ Steve Bannon

    Well now, this is the most interesting one to me. This isn't Epstein's only breakfast of the day; at 9 AM he meets with Reid Weingarten, one of his attorneys, about some redacted topic. Bannon's not exactly what I think of as a morning person or somebody who is ready to go at a moment's notice, so what could drag him out of bed so early? (Edit: This vexed me so I looked it up and sunrise was 6:48 AM that morning at sea level. It would have been the crack of dawn!)

    Epstein's Friday evening had had two haircuts, too, with plenty of redacted info; was he worried about appearing nice for Bannon? (The haircuts might not have been for Epstein, given context.) This was a busy day for Epstein; he had a redacted lunch date, and he also had somebody flying in/out that morning via JFK connecting to Saint Thomas and staying in a hotel room there. He then flew out of Newark in the evening to visit the infamous island itself, Little Saint James.

    The redaction doesn't quite tell us who this guest is, but it can't be Bannon because the Dems fucked up the redaction! I can see the edges of the descenders on the name, including a 'g' and 'j'/'q', but Bannon's name doesn't have any descenders.

    Also Prince Andrew's in there, I guess?

  • There isn't a way to solve problems without some value judgements. As long as there are Algol descendants and a lineage of C, there will be people with more machismo than awareness of systems, and they will always be patrician and sadistic in their language-design philosophy. Even left-leaning folks like Kelley (Zig) or DeVault (Hare) are not reasonable language designers; they might not be social conservatives but they aren't interested in advancing the art of programming. Zig's explicitly an attempt to iterate on C and C++ without giving up their core unsafety, while Hare is explicitly trying to travel decades back in time to fit onto a 1.41MiB floppy disk.

    I'd recommend stepping outside of the Algol world for a little bit. Hare, Rust, Zig, Go, and Odin have — at least to me, and to a few other PLT folks — the same semantics; they're all built on C++'s memory model and fully inherit its unsafety. (Yes, safe Rust is a safe subset; no, most production Rust is not safe Rust.) Instead, deliberately force yourself to use a Smalltalk, a Forth, a Lisp, an ML, or a Prolog; solve one or two problems in them over a period of about one month per language. This is the only way to understand the computer without the lens of Algol. Also, consider learning a deliberately unpleasant language like Brainfuck or Thue to give yourself an alien toy model to prevent yourself from getting mind-locked over the industry's concerns. If you like reading papers, I'd suggest exactly one paper to cure Algol sickness, the Galois theory of algorithms.

    Discussions on technology are excuses for dick-measuring and insulting people only to later claim that actually you are Dutch and it is in your culture to be an asshole.

    This is your call. Personally I've found that I can be blunt with evidence and technical claims while empathizing with the difficulty of understanding those claims, and this still allows for fruitful technical discussions. (Also, I have the free time to be vindictive, to paraphrase Yet Another Apolitical Programmer.) I've found that GvR (Python, Dutch) doesn't really understand most of the criticisms I've brought to the table, even when I wrote them up for the Python core team, and that the design-by-committee process left multiple Python committee members with a deep contempt for anybody who actually has to use their language. I've also found that "Ginger" Bill (Odin, British) is completely unable to have a discussion on this basis as he is too busy negging, sapping, and otherwise playing rhetorical tricks in order to get his way. Unrelated: I also found that DeVault (American) was willing to be less of a sex pest when threatened with a ban, which is a useful trick for moderators to know; in general, being harsh-but-fair to DeVault seems to have pushed him further and further to leftism and public decency over time.

    Also, sometimes people get removed from their communities! Walter Bright (D, American) was kicked out of the wider D community for generally having shitty politics in all arenas of life; the catalyst was likely some particularly transphobic remarks made a few years ago. Similarly, if Blow's Jai actually had anything interesting to contribute besides the soa and aos keywords then there would already be open-source knockoffs because Blow livestreams so many bigoted takes; arguably Odin is a Jai clone.

  • Other Scott has clarified his position on citational standards in a comment on his blog:

    Wow, that’s really cool; I hadn’t seen [a recent independence result]. Thanks! Given all the recent claims there have been to lower the n for which BB(n) is known to be independent of ZFC, though, I would like to establish a ground rule that a claim needs either a prose writeup explaining what was done or independent verification of its correctness, ideally both but certainly at least one, rather than just someone’s GitHub repo.

    In contrast, the Gauge's standard is that a claim needs reproducible computable artifacts as supporting evidence, with inline comments serving as sufficient documentation for those already well-versed in the topic, and any supporting papers or blog posts are merely a nicety for explaining the topic and construction to the mathematical community and laity at large. If a claim is not sufficiently strong then we should introduce more computational evidence to settle the question at hand.

    For example, Leng 2024 gives a construction in Lean 4. If this is not strong enough then the Gauge could be configured to compile a Nix-friendly Lean 4 and expend some compute in CI to verify the proof, so that the book only builds if Leng's proof is valid. Further critique would focus on what Leng actually proved in terms of their Lean 4 code. Other Scott isn't convinced by this, so it's not part of the story that they will tell.

  • I'm curious whether you or @BlueMonday1984@awful.systems are familiar with the concept of MINASWAN. The only time it's appeared in the discussion is in one of the apologies posted by one of the Ruby Central board members, as their signoff line. Quoting a 2016 analysis of MINASWAN in which it is argued that Ruby's central tenet is not MINASWAN, but wa (和):

    Just for the record, MINASWAN is at least half true. Matz is nice. … I would not call DHH nice. … So if MINASWAN is really a basic truth about the Ruby culture, then how does DHH fit in at all? … MINASWAN is garbage. It'd be more accurate to say, "Ruby showcases the Japanese value of 和, but we are arrogant Americans, so we reduce this to a really basic American idea, harshly compressing it in the process to a state where it cannot possibly mean anything any more, instead of bothering to learn something about the outside world for once." But MINASWAN was already a long acronym, so I guess they had to draw the line at RSTJVO和BWAAASWRTTARBAIHCIITPTASWICPMAAMIOBTLSATOWFO.

    Also, I really think it's worth understanding that Ruby is not at risk here. Ever since the release of RPG Maker XP in 2005, Ruby has been a staple of embedded scripting for game engines. Really, what we're seeing here is the demise of Rails.

  • There's an ACX guest post rehashing the history of Project Xanadu, an important example of historical vaporware that influenced computing primarily through opinions and memes. This particular take is focused on Great Men and isn't really up to the task of humanizing the participants, but they do put a good spotlight on the cults that affected some of those Great Men. They link to a 1995 article in Wired that tells the same story in a better way, including the "six months" joke. The orange site points out a key weakness that neither narrative quite gets around to admitting: Xanadu's micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd. My choice sneer is to examine a comment from one of the ACX regulars:

    The details lie in the devil, for sure...you'd want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.

    Ah yes, low enough to allow our heroic wiki-builders, wiki-citers, and wiki-correctors; and high enough to forbid their brutish wiki-pedants, wiki-lawyers, and wiki-deleters.

    Disclaimer: I know Miller and Tribble from the capability-theory community. My language Monte is literally a Python-flavored version of Miller's E (WP, esolangs), which is itself a Java-flavored version of Tribble's Joule. I'm in the minority of a community split over the concept of agoric programming, where a program can expand to use additional resources on demand. To me, an agoric program is flexible about the resources allocated to it and designed to dynamically reconfigure itself; to Miller and others, an agoric program is run on a blockchain and uses micropayments to expand. Maybe more pointedly, to me a smart contract is what a vending machine proffers (see How to interpret a vending machine: smart contracts and contract law for more words); to them, a smart contract is how a social network or augmented/virtual reality allows its inhabitants to construct non-primitive objects.

  • Some of our younger readers might not be fully inoculated against high-control language. Fortunately, cult analyst Amanda Montell is on Crash Course this week with a 45-minute lecture introducing the dynamics of cult linguistics. For example, describing Synanon attack therapy, YouTube comments, doomscrolling, and maybe a familiar watering hole or two:

    You know when people can't stop posting negative or conspiratorial comments, thinking they're calling someone out for some moral infraction, when really they're just aiming for clout and maybe catharsis?

  • It's because of research in the mid-80s leading to Moravec's paradox — sensorimotor stuff takes more neurons than basic maths — and Sharp's 1983 international release of the PC-1401, the first modern pocket computer, along with everybody suddenly learning about Piaget's research with children. By the end of the 80s, AI research had accepted that the difficulty with basic arithmetic tasks must be in learning simple circuitry which expresses those tasks; actually performing the arithmetic is easy, but discovering a working circuit can't be done without some sort of process that reduces intermediate circuits, so the effort must also be recursive in the sense that there are meta-circuits which also express those tasks. This seemed to line up with how children learn arithmetic: a child first learns to add by counting piles, then by abstracting to symbols, then by internalizing addition tables, and finally by specializing some brain structures to intuitively make leaps of addition. But sometimes these steps result in wrong intuition, and so a human-like brain-like computer will also sometimes be wrong about arithmetic too.

    As usual, this is unproblematic when applied to understanding humans or computation, but not a reasonable basis for designing a product. Who would pay for wrong arithmetic when they could pay for a Sharp or Casio instead?

    Bonus: Everybody in the industry knew how many transistors were in Casio and Sharp's products. Moravec's paradox can be numerically estimated. Moore's law gives an estimate for how many transistors can be fit onto a chip. This is why so much sci-fi of the 80s and 90s suggests that we will have a robotics breakthrough around 2020. We didn't actually get the breakthrough IMO; Moravec's paradox is mostly about kinematics and moving a robot around in the world, and we are still using the same kinematic paradigms from the 80s. But this is why bros think that scaling is so important.
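
    That back-of-envelope is easy to reproduce. The figures below are illustrative guesses of mine (a ~1990 chip at ~1e6 transistors, brain-scale sensorimotor hardware at ~1e12 switching elements), not Moravec's actual numbers, but they show why the sci-fi settled on roughly 2020:

    ```python
    import math

    # Hedged back-of-envelope: assume a 1990 chip has ~1e6 transistors and
    # that sensorimotor competence needs ~1e12 switching elements. Both are
    # illustrative guesses, not Moravec's published estimates.
    start_year = 1990
    start_transistors = 1e6
    target_transistors = 1e12

    # Moore's law: one doubling every 1.5 or 2 years, depending on the telling.
    doublings = math.log2(target_transistors / start_transistors)
    for doubling_period in (1.5, 2.0):
        year = start_year + doubling_period * doublings
        print(f"doubling every {doubling_period} yr -> breakthrough ~{year:.0f}")
    ```

    With the optimistic 18-month doubling you land on ~2020; with the 2-year version, ~2030. Either way it's the same arithmetic the bros are doing when they insist scaling is destiny.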

  • Wolfram has a blog post about lambda calculus. As usual, there are no citations and the bibliography is for the wrong blog post and missing many important foundational papers. There are no new results in this blog post (and IMO barely anything interesting) and it's mostly accurate, so it's okay to share the pretty pictures with friends as long as the reader keeps in mind that the author is writing to glorify themselves and make drawings rather than to communicate the essential facts or conduct peer review. I will award partial credit for citing John Tromp's effort in defining these diagrams, although Wolfram ignores that Tromp and an entire community of online enthusiasts have been studying them for decades. But yeah, it's a Mathematica ad.

  • There are many such terms! Just look at the list of articles under "See Also" for "The Emperor's New Clothes". My favorite term, not listed there, is "coyote time": "A brief delay between an action and the consequences of that action that has no physical cause and exists only for comedic or gameplay purposes." Closely related is the fact that industries don't collapse when the public opinion shifts, but have a stickiness to them; the guy who documented that stickiness is often quoted as saying, "Market[s] can remain irrational a lot longer than you [and I] can remain solvent."
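
    For the curious, coyote time is usually implemented as nothing more than a grace timer on the jump check. A minimal sketch (the 0.1-second window and the class shape are made-up tuning values, not from any particular engine):

    ```python
    # "Coyote time": the player may still jump for a short grace window after
    # walking off a ledge, a delay with no physical cause that exists purely
    # for gameplay feel.
    COYOTE_WINDOW = 0.1  # seconds of grace; an arbitrary tuning value

    class Player:
        def __init__(self):
            self.time_since_grounded = 0.0

        def update(self, dt, on_ground):
            # Reset the timer while grounded; accumulate airtime otherwise.
            if on_ground:
                self.time_since_grounded = 0.0
            else:
                self.time_since_grounded += dt

        def can_jump(self):
            return self.time_since_grounded <= COYOTE_WINDOW

    p = Player()
    p.update(0.016, on_ground=True)
    assert p.can_jump()       # grounded: jump allowed
    p.update(0.05, on_ground=False)
    assert p.can_jump()       # just left the ledge: still inside the grace window
    p.update(0.2, on_ground=False)
    assert not p.can_jump()   # too late: the coyote has looked down
    ```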

  • Fuck, your lack of history is depressing sometimes. That Venn diagram is well-pointed, even among people who have met RMS, and the various factions do not get along with each other. For a taste, previously on Lobsters you can see an avowed FLOSS communist ripping the mask off of a Suckless cryptofascist in response to a video posted by a recently-banned alt-right debate-starter.

  • Since appearing on Piers Morgan’s show, Eric Weinstein has taken to expounding additional theories about physics. Peer review was created by the government, working with Ghislaine Maxwell’s father, to control science, he said on “Diary of a CEO,” one of the world’s most popular podcasts. Jeffrey Epstein was sent by an intelligence agency to throw physics off track and discourage space exploration, keeping humanity trapped in “the prison built by Einstein.”

    Heartbreaking! Weinstein isn't fully wrong. Maxwell's daddy was Robert Maxwell, who did indeed have a major role in making Springer big and kickstarting the publish-or-perish model, in addition to having incredibly tight Mossad ties; the corresponding Behind the Bastards episodes are subtitled "how Ghislaine Maxwell's dad ruined science." Epstein has been accused of being a Mossad asset tasked with seeking out influential scientists like Marvin Minsky to secure evidence for blackmail and damage their reputations. As they say on Reddit, everybody sucks here.

  • I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it's common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn't priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.

    From that angle, it's worth understanding that today's generative tooling will also be configured for capitalism. Indeed, that's basically what RLHF does to a language model; in the jargon, it creates an "agent", a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won't ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection ("filters") in selfie-sharing apps aren't added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they'll invest in Shmoo production in order to capture more of a potential future market.
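
    The weather case is ensemble forecasting: perturb the initial conditions, roll many simulated futures forward, and do statistics on the spread. A toy sketch of the idea (the noisy drift below is invented for illustration, not any real atmospheric model):

    ```python
    import random
    import statistics

    random.seed(42)

    def step(temp):
        # One day of toy "dynamics": a small random drift. A real model
        # integrates physics here; the noise stands in for chaos.
        return temp + random.gauss(0, 0.5)

    def forecast(initial_temp, days, members):
        finals = []
        for _ in range(members):
            # Each ensemble member starts from a slightly perturbed
            # initial condition, mimicking observation uncertainty.
            t = initial_temp + random.gauss(0, 0.2)
            for _ in range(days):
                t = step(t)
            finals.append(t)
        return statistics.mean(finals), statistics.stdev(finals)

    mean, spread = forecast(15.0, days=7, members=200)
    print(f"7-day ensemble: {mean:.1f} C +/- {spread:.1f}")
    ```

    Generate many possible futures, keep the statistics, discard the individual hallucinations; that's the raw-generation use case done right.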

    So, imagine that we build miniature cloned-voice text-to-speech models. We don't need to imagine what they're used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.

    Finally, yes, you're completely right that e.g. smartphones revolutionized filmmaking. It's important to know that the film industry didn't intend for this to happen! This is just as much of an exaptation as capitalist recuperation and we can't easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y'know, plan interference).

  • I'm gonna start by quoting the class's pretty decent summary, which goes a little heavy on the self-back-patting:

    If approved, this landmark settlement will be the largest publicly reported copyright recovery in history… The proposed settlement … will set a precedent of AI companies paying for their use of pirated websites like Library Genesis and Pirate Library Mirror.

    The stage is precisely the one that we discussed previously, on Awful in the context of Kadrey v. Meta. The class was aware that Kadrey is an obvious obstacle to succeeding at trial, especially given how Authors Guild v. Google (Google Books) turned out:

    Plaintiffs' core allegation is that Anthropic committed large-scale copyright infringement by downloading and commercially exploiting books that it obtained from allegedly pirated datasets. Anthropic's principal defense was fair use, the same defense that defeated the claims of rightsholders in the last major battle over copyrighted books exploited by large technology companies. … Indeed, among the Court's first questions to Plaintiffs' counsel at the summary judgment hearing concerned Google Books. … This Settlement is particularly exceptional when viewed against enormous risks that Plaintiffs and the Class faced… [E]ven if Plaintiffs succeeded in achieving a verdict greater than $1.5 billion, there is always the risk of a reversal on appeal, particularly where a fair use defense is in play. … Given the very real risk that Plaintiffs and the Class recover nothing — or a far lower amount — this landmark $1.5 billion+ settlement is a resounding victory for the Class. … Anthropic had in fact argued in its Section 1292(b) motion that Judge Chhabria held that the downloading of large quantities of books from LibGen was fair use in the Kadrey case.

    Anthropic's agreed to delete their copies of pirated works. This should suggest to folks that the typical model-training firm does not usually delete their datasets.

    Anthropic has committed to destroy the datasets within 30 days of final judgement … and will certify as such in writing…

    All in all, I think that this is a fairly healthy settlement for all involved. I do think that the resulting incentive for model-trainers is not what anybody wants, though; Google Books is still settled and Kadrey didn't get updated, so model-trainers now merely must purchase second-hand books at market price and digitize them, just like Google has been doing for decades. At worst, this is a business opportunity for a sort of large private library which has pre-digitized its content and sells access for the purpose of training models. Authors lose in the long run; class members will get around $3k USD in this payout, but second-hand sales simply don't have royalties attached in the USA after the first sale.

  • It's worth understanding that Google's underlying strategy has always been to match renewables. There are no sources of clean energy in Nebraska or Oklahoma, so Google insists that it's matching those datacenters with cleaner sources in Oregon or Washington. That's been true since before the more recent net-zero pledge and it's more than most datacenter operators will commit to doing, even if it's not enough.

    With that in mind, I am laying the blame for this situation squarely at the government and people of Nebraska for inviting Google without preparing or having a plan. Unlike most states, Nebraska's utilities have been owned by the public since the 1970s, and I gather that the board of the Omaha Public Power District is elected. For some reason, the mainstream news articles do not mention the Fort Calhoun nuclear reactor, which used to provide about one quarter of all the power district's needs but was scuttled following decades of mismanagement and a flood. They also don't quite explain that the power district canceled two plans to operate publicly-owned solar farms with similar capacity (~600 MW per farm compared with ~500 MW from the nuclear reactor), although WaPo does cover the canceled plans for Eolian's batteries, which I'm guessing could have been anywhere from 50-500 MWh of storage capacity. Nebraska repeatedly chose not to invest in its own renewables story over the past two decades, but thought it was a good idea to seek electricity-hungry land-use commitments, focusing on tens of millions of USD in tax dollars while ignoring hundreds of millions of USD in required infrastructure investments. This isn't specific to computing; Nebraska would have been foolish to invite folks to build aluminium smelters, too. Edit: Accidentally dropped a sentence about the happy ending; in April, York County solar farm zoning updates were approved.

    If you think I'm being too cynical about Nebraskans, let me quote their own thoughts on solar farms, like:

    Ag[ricultural] production will create more income than this solar farm.

    [York County is] the number one corn raising county in Nebraska…

    How will rotating the use of land to solar benefit this land? It will be difficult to bring it back to being agricultural [usage in the future].

    All that said, Google isn't in the clear here. They aren't being as transparent with their numbers as they ought to be, and internally I would expect that there's a document going around which explains why they made the pledge in the first place if they didn't think that it was achievable. Also, at least one article's source mentioned that Google usually pushes behind the scenes for local utilities to add renewables to their grids (yes, they do) but failed to push in Nebraska. Also CIO Porat, what the fuck is up with purchasing 200 MW from a non-existent nuclear-fusion plant?

  • Sibling comment is important recent stuff. Historically, the most important tantrum he's thrown is DJB v USA in 1995, where he insisted that folks in the USA have a First Amendment right to publish source code. He also threw a joint tantrum with two other cryptographers over the Dual EC DRBG scandal after Snowden revealed its existence in 2013. He's scored real wins against the USA for us, which is why his inability to be polite is often tolerated.

  • They’re objects! They’re supposed to be objectified! But I’m not so comfortable when I do that, either.

    Thank you for being candid and wrestling with this. There isn't a right answer. Elsewhere, talking directly to AI bros, I put it this way:

    Nobody wants to admit that we only care whether robots aren’t human because we mistreat the non-humans in our society and want permission to mistreat robots as well.

    I was too pessimistic. You're willing to admit it, and I bet that a bunch of other folks are, too. I appreciate it.

  • Unironically, Joe Rogan and Elon Musk (and IIRC Kanye West) used the death of Harambe to spread conspiracy theories. They use a playbook designed by Steve Bannon:

    1. Y'know, this awful thing happened
    2. The people in charge didn't handle it well
    3. There's a reason for this: conspiracy
    4. You know who is behind this? It's Ethnic Outgroup! They are the true villains, that Ethnic Outgroup, they're behind the conspiracy
    5. I want you to get up, go to the window, and yell "I'm mad as hell and I'm not gonna take it anymore"