The reason, I suspect, is fundamentally that there’s no relationship between the uppercase and lowercase characters unless someone goes out of their way to create one. That requires the filesystem to contain knowledge of the alphabet, which might work if all you wanted was to handle ASCII in American English, but isn’t good for a system which needs to support the whole world.
In fact, the UNIX filesystem isn’t ASCII. It’s also not Unicode. UNIX uses arbitrary byte strings, with special significance given to a very small number of bytes (just ‘/’ and ‘\0’, I think). That means people are free to label files in whatever way they like, and their terminals or other applications are free to render them in whatever way seems appropriate, without the filesystem having to understand Unicode.
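To make that concrete, here’s a minimal sketch in Python (the temp directory and filename are mine, just for the demo), assuming a POSIX system where the kernel sees filenames as raw bytes:

```python
import os
import tempfile

# On POSIX, Python lets you pass filenames as raw bytes, which is
# exactly what the kernel sees: any byte string is a legal name as
# long as it contains no b'/' (separator) and no b'\x00' (terminator).
workdir = tempfile.mkdtemp().encode()

# 0xC3 followed by 0x28 is not valid UTF-8, but the filesystem
# doesn't care; it never interprets the bytes as characters.
name = b'\xc3\x28 definitely not unicode'
path = os.path.join(workdir, name)

with open(path, 'wb') as f:
    f.write(b'hello')

# The bytes come back verbatim. Rendering them is the terminal's
# (or application's) problem, not the filesystem's.
print(os.listdir(workdir))
```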
Adding case insensitivity would therefore mean adding significant and unnecessary complexity to the filesystem drivers, and we’d probably take a big step backwards in support for other languages.
No, I’m arguing that the extra complexity is something to avoid because it creates new attack surfaces and new opportunities for bugs, and is very unlikely to deal accurately with all of the edge cases.
Especially when you consider that the behaviour we have was established way before there even was a Unicode standard which could have been applied, and when the alternative you want isn’t unambiguously better than what we have now.
“What is language?” is a far more insightful question than you apparently intended, because our collective best answer to it right now is the Unicode standard, and even that’s not perfect. Making the very core of the filesystem deal with that is a can of worms which a competent engineer wouldn’t open without very good reason, and at best I’m seeing a weak and subjective reason here.
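To give a flavour of those edge cases, here’s a small Python sketch using str.casefold(); it’s just an illustration of Unicode case folding, not anything a filesystem actually does:

```python
# Unicode case folding is neither one-to-one nor language-neutral,
# which is why baking it into a filesystem is so fraught.

# German: folding 'ß' yields 'ss', so folding changes string length
# and makes names that look distinct compare equal.
print('ß'.casefold())                               # 'ss'
print('straße'.casefold() == 'strasse'.casefold())  # True: a collision

# Turkish: dotted 'i' and dotless 'ı' are separate letters, and the
# uppercase of 'i' is 'İ'. Default (locale-blind) folding maps 'I'
# to 'i', which is right for English and wrong for Turkish.
print('I'.casefold())                    # 'i'
print('ı'.casefold() == 'i'.casefold())  # False, yet a Turkish user
                                         # would expect these to match
```

And since new characters keep gaining case mappings in later Unicode versions, two kernels built against different versions of the standard could even disagree about which names are “the same”.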