One of the powerful things about the SL/Opensim system is that it makes few assumptions about how the system should be used. Personally, I’d prefer a system which makes even fewer assumptions, but that’s a rant for another day. To accommodate this flexibility, the system stores very little content locally (just the avatar mesh and some basic textures, I believe). Everything else is fetched from the servers when the user comes within visual range of it. By doing things this way, anyone with an idea and access to the tools can extend the world and create new content. There’s a lot that’s really great about this model, but some things that are really horrible about it too.
Take this sign for example. The entire front of the sign is a texture, an image file that’s been uploaded to the server and applied to the front face of the sign object. The information the sign is trying to communicate isn’t complex at all, just some text on a flat-colored background. That’s the kind of thing you generally don’t want your users waiting around to load. But alas, because the information is baked into an image file, we have to wait for the image file to load in order to read its message.
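A quick back-of-envelope comparison shows how lopsided this is. The sign text and texture dimensions below are hypothetical, and the 10:1 compression ratio is a rough assumption, but the orders of magnitude are the point: the message itself is a few dozen bytes, while the texture that carries it is tens of thousands of bytes even after compression.

```python
# Hypothetical sign message sent as plain text vs. baked into a texture.
message = "GRAND OPENING - FREE AVATARS - TELEPORT INSIDE"
text_bytes = len(message.encode("utf-8"))      # a few dozen bytes

# Assume a typical 512x512 RGBA sign texture.
width, height, channels = 512, 512, 4
raw_texture_bytes = width * height * channels  # 1 MiB uncompressed

# Even assuming a generous ~10:1 compression ratio for the
# texture format, the image dwarfs the text it encodes.
compressed_estimate = raw_texture_bytes // 10

print(text_bytes)           # 46
print(raw_texture_bytes)    # 1048576
print(compressed_estimate)  # 104857
```

Under these assumptions the texture is still over two thousand times larger than the text it exists to display, which is exactly the download the user is stuck waiting on.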
This is a really, REALLY bad user experience. Most new users don’t understand why everything looks so blurry and blank, and many don’t have the patience to wait for every texture to load just so they can read some basic text. This is not the technology conforming to the needs of the user; it’s the other way around.
Streaming media has an immense amount of potential, but it has to be used right. If Opensim (and SL, but I highly doubt they’ll ever change) doesn’t figure out a way around problems like these, the platform will forever be limited to those with the patience to wait for simple text to emerge from the murk, and the devout faith that what that text says will be worth it.