All of those are definitely not metric, as metric uses steps of 1000 (there's also 10 and 100 and 1/10th and 1/100th, but that doesn't extend to 10000 or 1/10000th).
KiB, MiB, etc. (the 2^10 scale) are called binary prefixes, as opposed to the decimal prefixes kB, MB, etc., and were standardised by the IEC.
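For a concrete feel of the gap, here's a rough Python sketch (the format_size helper and the unit lists are just made up for illustration) that prints the same byte count with decimal and with binary prefixes:

```python
import math

# Decimal (SI) prefixes step by 1000, binary (IEC) prefixes by 1024 (2^10).
SI_UNITS = ["B", "kB", "MB", "GB", "TB"]
IEC_UNITS = ["B", "KiB", "MiB", "GiB", "TiB"]

def format_size(n_bytes, base, units):
    """Pick the largest unit that keeps the value >= 1 and format with it."""
    exp = 0 if n_bytes == 0 else min(int(math.log(n_bytes, base)), len(units) - 1)
    return f"{n_bytes / base**exp:.2f} {units[exp]}"

# A "500 GB" drive as marketed (decimal) vs. as an OS may report it (binary):
n = 500 * 1000**3
print(format_size(n, 1000, SI_UNITS))   # 500.00 GB
print(format_size(n, 1024, IEC_UNITS))  # 465.66 GiB
```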
And while the B in KiB is always going to mean eight bits, it's not a given that a byte is actually eight bits; network people still use "octet" to disambiguate, because back in the day there were plenty of architectures around with other byte sizes. In the context of architectures, "byte" simply means the smallest number of bits an operation like addition will be done in. Then you have word for two bytes, d(ouble)word for four, q(uad)word for eight, o(cto)word for 16, and presumably h(ex)word for 32, though it's already hard to find owords in the wild. Yes, the naming is off by one; of course it's off by one, what do you expect, it's about computers. There's also nibble for half a byte.
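If it helps, here's a quick Python sketch of those x86-flavoured size names and the nibble thing; the SIZES dict and the variable names are mine, not any standard API:

```python
# Widths in bits for the x86-flavoured names discussed above.
SIZES = {
    "nibble": 4,    # half a byte
    "byte":   8,
    "word":   16,   # two bytes
    "dword":  32,   # double word, four bytes
    "qword":  64,   # quad word, eight bytes
    "oword":  128,  # octo word, sixteen bytes
}

value = 0xDEADBEEF  # fits in a dword

# Split the dword into its eight nibbles, most significant first.
nibbles = [(value >> shift) & 0xF for shift in range(28, -1, -4)]
print([hex(n) for n in nibbles])  # ['0xd', '0xe', '0xa', '0xd', '0xb', '0xe', '0xe', '0xf']

# The low word and high word of the same dword.
low_word = value & 0xFFFF           # 0xBEEF
high_word = (value >> 16) & 0xFFFF  # 0xDEAD
print(hex(low_word), hex(high_word))
```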
EDIT: Actually that's incorrect: "word" is also architecture-dependent. The word/dword/qword sequence applies to architectures (like x86) which went from being 16-bit machines to now being 64-bit while keeping backwards compatibility. E.g. RISC-V uses 32-bit words, and 16 bits there is a half-word.
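A rough sketch of that architecture dependence (the two entries are just the examples from above, nothing exhaustive, and the dict is purely illustrative):

```python
# Native "word" size in bits; what a given width is called depends on this.
NATIVE_WORD_BITS = {
    "x86":    16,  # kept the 16-bit "word" for backwards compatibility
    "RISC-V": 32,  # a "word" is 32 bits, so 16 bits is a "half-word"
}

for arch, word in NATIVE_WORD_BITS.items():
    print(f"{arch}: word = {word} bits, so 32 bits = {32 // word} word(s), "
          f"64 bits = {64 // word} word(s)")
```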
The bit, at least, is not under contention; everyone agrees what it is. Though you can occasionally see people staring in wild disbelief and confusion at statements such as "this information can be stored in 1.58 bits". That number is ≈ log2(3), that is, the information that fits in one trit, such as "true, false, maybe".
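And a quick sanity check of the 1.58 number, purely illustrative:

```python
import math

# Information content of one trit (a three-valued symbol: true / false / maybe).
bits_per_trit = math.log2(3)
print(f"{bits_per_trit:.4f} bits per trit")  # 1.5850

# Concretely: 5 trits have 3**5 = 243 distinct states, which fit in 8 bits
# (2**8 = 256), whereas storing each trit in its own 2 bits would take 10.
n_trits = 5
states = 3 ** n_trits
bits_needed = math.ceil(math.log2(states))
print(f"{n_trits} trits -> {states} states -> {bits_needed} bits "
      f"(vs {2 * n_trits} bits at 2 bits per trit)")
```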