
  • How do you know what it's supposed to do, if no one actually wrote that down, other than:

    As a person,
    I would like it to work,
    So I can do the things.

    To be fair, at least that's something...

    Or maybe, for testing, the documentation is the code: the code does this, so write a test that accepts that it does this.

    I like the concept of describing things in scenarios and having data objects embedded in the scenarios. I think Gherkin is a bit too restrictive, the same way user stories are, but a more natural, verbose scenario, parameterised with variables tied to actual data, makes it explicit what is supposed to happen and what data the system will consume, create or manipulate.

    E: there are, of course, other types of documentation available
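    The parameterised-scenario idea above can be sketched in plain Python (this is not Gherkin or behave, just an illustration; the `transfer` system and the data values are hypothetical):

    ```python
    # A verbose scenario template whose placeholders are bound to concrete
    # data, so the test states exactly what the system consumes and changes.
    SCENARIO = (
        "When a user transfers {amount} from account {src} to account {dst}, "
        "{src} drops by {amount} and {dst} rises by {amount}."
    )

    def transfer(balances, src, dst, amount):
        # Hypothetical system under test.
        balances[src] -= amount
        balances[dst] += amount

    def run_scenario(data):
        # The data object is embedded in the scenario text itself, so the
        # failure message reads as documentation of intended behaviour.
        description = SCENARIO.format(**data)
        balances = {data["src"]: 100, data["dst"]: 50}
        before = dict(balances)
        transfer(balances, data["src"], data["dst"], data["amount"])
        assert balances[data["src"]] == before[data["src"]] - data["amount"], description
        assert balances[data["dst"]] == before[data["dst"]] + data["amount"], description
        return description

    run_scenario({"src": "A", "dst": "B", "amount": 30})
    ```

    The point is that the scenario text and the test share the same data, so the prose can never drift out of sync with what is actually asserted.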

  • Really depends how you measure the economy. Gross national happiness seems like a better way to judge the health of an economy than GDP, which has little bearing on the state of most people's lives.

    Humans make all this shit up. "Line goes up" is a completely valid retort to how the economy is being mismanaged, because it is seemingly what matters most, regardless of the quality of people's lives.

    Saying that if the line didn't go up, people's lives would be worse is true, but only because of who we are letting rule the playground, i.e. if they don't have all the toys then nobody is getting anything.

  • Cool! This seems like a good write-up on it:

    https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-control-84c9c11dc3a9

    Bottleneck Bandwidth and Round-trip propagation time (BBR) is a TCP congestion control algorithm developed at Google in 2016. Up until recently, the Internet has primarily used loss-based congestion control, relying only on indications of lost packets as the signal to slow down the sending rate. This worked decently well, but the networks have changed. We have much more bandwidth than ever before; the Internet is generally more reliable now, and we see new things such as bufferbloat that impact latency. BBR tackles this with a ground-up rewrite of congestion control, and it uses latency, instead of lost packets, as a primary factor to determine the sending rate.
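    The core idea from that quote can be sketched with a bit of arithmetic (a simplification, not the real algorithm; the numbers are illustrative): BBR estimates the bottleneck bandwidth and the minimum round-trip time, and aims to keep roughly their product, the bandwidth-delay product (BDP), in flight instead of filling buffers until packets drop.

    ```python
    # Simplified sketch of BBR's pacing target: the bandwidth-delay product,
    # i.e. how many bytes "in flight" keep the pipe full without queueing.
    def bdp_bytes(bottleneck_bw_bps, min_rtt_s):
        """BDP in bytes for a bottleneck bandwidth (bits/s) and min RTT (s)."""
        return bottleneck_bw_bps / 8 * min_rtt_s

    # e.g. a 100 Mbit/s bottleneck with a 40 ms minimum RTT:
    inflight = bdp_bytes(100e6, 0.040)  # 500_000.0 bytes
    ```

    Loss-based algorithms keep pushing past this point until a buffer overflows, which is exactly the bufferbloat latency the quote mentions; pacing to the BDP avoids building that queue in the first place.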

  • This is pretty funny, kinda suggests they have no faith in the engineers they work with... ffmpeg is an awesome piece of work, but if it's a bug they can reproduce to some degree, then, like you said, it's 100% a them problem!

    E: oh, I was thinking it was a PM that raised it, but it seems it was possibly one of their developers, brutal....