60 Seconds Into the Future
By The Future Futurist
The podcast currently has 10 episodes available.
Cryptocurrency—sometimes referred to as “bit-currency”—uses a blockchain, a form of public ledger, to record financial transactions. Although a cryptocurrency can be tied to a fiat currency like the US dollar or British pound, standalone cryptocurrencies like Bitcoin and Ether have become a popular way to transfer money without relying on government-backed authorities.
This can be confusing to many people, since they assume that money has to be backed by something physical like gold. In fact, the US dollar, and most government-created currencies, have not been backed by anything other than the name of the government for several decades. This means that almost all currencies are based on trust in the institutions that created them.
With Bitcoin and other cryptocurrencies, this trust is in the strength of the cryptography to keep the records safe and private, and in their ability to be exchanged for more commonly accepted currencies like dollars and pounds.
However, as these currencies become increasingly popular, it is likely they will be accepted directly without the need to convert them to fiat currencies.
A blockchain is a cryptographically encoded database that is publicly shared. Imagine a spreadsheet, where each page is encrypted so that only the appropriate parties can view the un-encoded data, but anyone can have a copy of that encoded data. Each page in the spreadsheet is its own record, called a block, with a link, called a hash, back to the previous block.
This publicly shared spreadsheet of information is called a distributed ledger. Anyone can have a copy of the encoded ledger and add to it. Having multiple copies ensures that previous blocks in the chain cannot be altered: any change to one copy will be flagged as inconsistent with the other copies and overridden with the original data, preventing falsified records.
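The hash-linked structure described above can be sketched in a few lines of Python. This is a minimal illustration, not a real blockchain: there is no encryption, consensus mechanism, or proof-of-work, and the record format and function names are invented for the example.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (illustrative; real chains hash a binary header)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a new block that links back to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Check every link in the chain; any tampering breaks a link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
assert verify(ledger)

ledger[0]["data"] = "Alice pays Bob 500"  # falsify an earlier record
assert not verify(ledger)                 # the altered block is detected
```

Because each block embeds the previous block’s hash, changing any earlier record changes its hash and breaks every link after it, which is why copies holding the original data can spot and reject the falsified version.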
Crypto has become the colloquial term and prefix for any technology that relies on cryptography for some or all of its functions.
Cryptography is the study and practice of sending secure communications between two points while preventing any third parties from intercepting or reading that information.
This requires encoding the information so that only a specific key can decode the message; to anyone without that key, it is gibberish. The key might be a physical device or, in modern times, computer software that encrypts the information on one side and decrypts it for the receiving party.
Digital cryptography has become crucial for modern computer communication, ensuring not only greater privacy but also that the information has not been tampered with or changed en route.
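The key-based encode/decode cycle can be sketched with Python’s standard library. The XOR cipher here is a toy stand-in for illustration only—real systems use vetted ciphers such as AES—while the HMAC tag shows how tampering en route can be detected.

```python
import hmac
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption': applying the same key twice restores the message.
    Illustrative only; real systems use vetted ciphers like AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared secret key"          # known only to sender and receiver
message = b"Meet at noon"

ciphertext = xor_cipher(message, key)  # gibberish to anyone without the key
tag = hmac.new(key, ciphertext, hashlib.sha256).hexdigest()  # integrity check

# The receiver, holding the same key, verifies the tag and decodes:
expected = hmac.new(key, ciphertext, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)   # message was not altered en route
assert xor_cipher(ciphertext, key) == message
```

The HMAC tag is the piece that guarantees tampering is detectable: any change to the ciphertext in transit produces a different tag, so the receiver rejects the message.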
Machine Learning, or ML, uses algorithms and statistical models that allow a computer to progressively improve its performance at a specific task. This is critical for creating more useful and realistic AI machines.
For example, ML systems are used to filter “spam” emails, learning not only from you but from other users which emails are acceptable and which are not. More extensively, ML is used to learn our routines in order to make travel suggestions and, in the future, may be used with AI to drive our cars.
However, ML can also be used for censoring content deemed offensive by a controlling party, learning your purchasing habits to promote products to you, and delivering propaganda that is most likely to sway your opinions.
Although widely used by modern applications from email to computer security, we are often unaware of the invisible hand of ML systems in digital communication affecting what we see, what we do, and what we hear.
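The spam-filtering idea can be sketched as a toy word-frequency scorer in Python. The training messages below are invented for the example, and real filters use far more sophisticated statistical models, but the principle—improving judgments from labeled examples—is the same.

```python
from collections import Counter

# Tiny hand-made training set (hypothetical examples)
spam = ["win free money now", "free prize claim now"]
ham = ["meeting notes for today", "lunch at noon today"]

spam_words = Counter(w for msg in spam for w in msg.split())
ham_words = Counter(w for msg in ham for w in msg.split())

def spam_score(message: str) -> float:
    """Score > 0 means the words look more spam-like than ham-like.
    Each word shifts the score by how often it appeared in each class."""
    score = 0.0
    for w in message.split():
        score += spam_words[w] - ham_words[w]
    return score

print(spam_score("claim your free prize"))   # positive: spam-like words
print(spam_score("notes from the meeting"))  # negative: ham-like words
```

Marking more messages as spam or not-spam updates the word counts, so the scorer’s judgments improve with experience—which is what “learning” means here.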
Artificial Intelligence, or AI, has as its goal allowing computers to perceive and learn from their environment in order to adapt to situations and better solve problems without direct human input.
When talking about “Artificial” Intelligence, many people assume that the intelligence we are trying to create will be recognizable as human intelligence, as often portrayed in science-fiction.
However, AI cannot make computers intelligent in the way a human being is; instead, it allows them to acquire and apply knowledge and skills while appearing intelligent to humans. This is an important distinction. The ability to sound human is not critical to actual artificial intelligence, only to our ability to interface with computers.
The best an AI system can hope to achieve, then, is an ersatz human intelligence. It will be faster at certain computational tasks, but that intelligence is a simulacrum, meant to fool us into thinking we are interacting with a consciousness similar to our own.
In fact, AI will always be as alien as any bug-eyed monster from science fiction.
Creativity, or creative thinking, is not a skill that many people believe they have. Instead of a skill to be practiced, many believe it is a talent you are either born with or not. Like art, when it comes to being creative, many people agree “I don’t know much about it, but I know what I like.”
Simply stated, creativity is the ability to take information from a variety of often disparate sources and meld those ideas into new and novel ways of thinking about or solving a problem. Although computers are replacing humans for many tasks, creativity is something that computers cannot do, at least not in the way that we think of as human creativity.
In the article “The 10 skills you need to thrive in the Fourth Industrial Revolution,” the World Economic Forum predicts that creativity will go from being the tenth most important job skill in 2015 to being the third most important job skill by 2020.
According to the article: “Robots may help us get to where we want to be faster, but they can’t be as creative as humans (yet).”
Mixed Reality — or MR — takes AR one step further by detecting the participant’s physical environment and allowing the digitally generated reality to interact with that environment. Thus, unlike AR, where virtual objects can only sit on top of the background, with MR, virtual objects can move around and behind those objects.
MR differs from AR in that it requires a device that can not only show the physical environment, but can also scan that environment to detect solid objects and their dimensions. Since this requires extra hardware, most commonly available devices that can do AR are not candidates for MR.
The best example of the promise MR might offer in the future comes from the company Magic Leap. They are offering a headset that can scan the environment and add elements in and around scanned objects.
One important note: Because of their similarity, it is likely that the concepts of AR and MR will soon be collapsed into one concept, with people generally referring to AR for both.
Augmented Reality — or AR — places a digitally rendered layer over the vision of the participant. This means the viewer can still see their surroundings, unlike VR which completely replaces their view with a digital view. Although the digitally augmented reality may seem to fit on top of the real scene, it cannot truly interact with the objects being viewed.
AR can use hand-held digital devices with front-facing cameras, such as mobile phones and tablets. The camera takes in the surrounding live scene, and the device places the virtual layer on top in real time.
The most commonly seen example of AR is Pokémon Go, which allows players to use a mobile device’s camera to look at their surroundings while the app places characters on top of the screen for them to interact with.
However, like VR, AR can also make use of eyewear devices to create a more immersive experience, but one that does not require anything as bulky or invasive as VR headsets.
Virtual Reality — or VR — is a fully immersive digital experience that completely replaces all visual and audio input for the individual. In other words, the outside world is completely obscured and replaced with a computer-rendered world from the user’s point-of-view.
Often associated with the cyberpunk genre, the promise of VR can be seen in movies like The Matrix and Ready Player One, or in books like William Gibson’s Neuromancer and Neal Stephenson’s Snow Crash. Unfortunately, these representations generally show a negative view of the technology as it pushes people further apart in the real world.
Currently, VR requires an eyewear device to be worn, generally resembling large ski goggles, that places a screen in front of the viewer and blocks all external light and sound.
The digital world in VR can be as real or as fantastic as desired, allowing the user to walk around and engage with solid objects or fly and walk through walls. The physics of this world are completely controlled by the computer.
The promise of immersive digital worlds—where anything is possible—is as enticing as it has been long in coming. The concept has been around in science-fiction for decades. The promise was made in books like Neuromancer and Snow Crash, TV shows like Doctor Who and Star Trek: The Next Generation, and movies like The Matrix and Johnny Mnemonic.
However, practical application of the most well-known digital reality, “Virtual Reality”—or VR—has always seemed to be just on the horizon. We’ve heard the promise that it’s just “a few years away” at least since the early 1990s, but it has never achieved widespread adoption.
Although the technology was there, and there were many attempts to bring VR to the public, no one was ready to start wearing the bulky hardware on a daily basis.
Recently, though, with improvements in computer speeds, wearable screens, and the popular acceptance of wearable computer technology, creating immersive computer generated realities finally seems to be arriving.
However, rather than VR, we are seeing more promise in Augmented Reality and Mixed Reality. Although all three techniques present a digitally rendered experience shown from the participant’s point-of-view, they do so in unique ways.