I recently got an email from my father about Google's recent quantum computing breakthrough:
“Yet another technology that will most assuredly have negative consequences and go under- or unregulated. Please excuse my cynicism. What do you think about this general development?”
I can absolutely see why he would feel that way, and he may well be correct.
However, as a technologist who remains optimistic in the face of the Amazon + NSA partnerships forged in hell, big data, and everything else, I wanted to see if I couldn't explain WHY the heck I can be optimistic when it looks plainly like the Zuckerbergs of the world are going to turn us all into some sort of fat, obsolete blobs barely recognizable as humans.
I work in software and have recently been focusing on the blockchain space and other decentralizing technologies. This is where the hope lies, because a decentralized world has no central authority to manipulate and control us in the first place. But there's a lot to understand before you can imagine what that means; it has taken me years. I tried to distill this down in my email response to my father:
From what I understand, we are still a good decade away from practical applications of quantum computing. The potential negatives and positives of real, practical quantum computing are hard to imagine.
One thing they could help with a LOT is running giant-scale simulations to understand the environmental impacts of different choices. It could also be a path to artificial general intelligence, which of course could be the best or worst (or both) thing ever to happen to people.
I understand, but I wouldn't advise cynicism, partly because it won't slow anything down anyway, and also because it's like having a cynical attitude towards the invention of calculus. No one knows where it will take us; we only know it's taking us somewhere different.
IPFS
However, I have been learning about an exciting new technology with immediate practical uses, called the InterPlanetary File System (IPFS), which promises to replace the existing internet infrastructure in a completely decentralized way. Zero data centers needed. The question of "why would Google allow this to happen?" is answered by the fact that this kind of infrastructure is vastly more efficient.
Think of it this way. Say you have a file and you want to give it to me. We are in the same small hut in rural Guatemala. Under the current architecture of the web, your phone needs to send it to a server hundreds or thousands of miles away, and then back to me, even though I'm right next to you. If that server is down, you, me, and millions of others are out of luck.
IPFS would allow you to send it directly to me. Not only that but if Guatemala’s connection to the rest of the WWW was severed, everyone in Guatemala would still be able to communicate with each other.
It gets better. (This gets a little technical, but I'm sure you can follow it. Feel free to ask me any questions you like.)
Addresses are based on data, not third-party domain registries
Instead of using a web address that points to where data lives, you use a fingerprint of the data itself. No one has to own a domain name, and no one can take the data down so that you arrive at the address only to find it gone. A fingerprint of the data points to the data itself, wherever it lives!
Where does it live? On the personal computer or hard drive of whoever wants to store it. When you want a file, video, website, image, or anything else, the protocol finds the computer or computers nearest to yours to download it from.
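If it helps to make that concrete, here's a toy sketch in Python (nothing like the real IPFS code, which uses multihashes and CIDs, but the idea is the same): the address is nothing more than a cryptographic fingerprint of the bytes themselves.

```python
# Toy illustration of content addressing (not the real IPFS implementation).
import hashlib

def content_address(data: bytes) -> str:
    """The 'address' is just a fingerprint (hash) of the data itself."""
    return hashlib.sha256(data).hexdigest()

page = b"Chapter 1: It was a dark and stormy night..."
print(content_address(page))

# The same bytes always produce the same address, no matter whose computer
# they live on, and changing even one byte produces a different address.
assert content_address(page) == content_address(bytes(page))
assert content_address(page) != content_address(page + b"!")
```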
It continues to get better.
Right now on the internet, if you publish a book and then come out with a new version where, say, just 15 of 1000 pages are edited, the whole book has to be copied. IPFS, on the other hand, uses the same kind of highly efficient "versioning" that programmers like me rely on (a tool called Git). So instead of the new version being an entirely new copy that duplicates the unchanged 985 pages, you simply create the 15 new pages, and the software automatically grabs the original 985. If you want the new version, it combines the two; if you want the old version, it simply delivers the original 1000 pages.
Since the address is simply a fingerprint of the data itself, all you need to know is that fingerprint, be it of the original or of the new version, and your computer will collect the needed data from nearby computers.
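Here's a rough sketch of that versioning idea (purely illustrative, not how Git or IPFS are actually implemented): store each page under its fingerprint, let a "book" be just an ordered list of page fingerprints, and publishing the second edition only adds the 15 changed pages.

```python
# Toy content-addressed page store: publishing a revised edition only adds
# the pages that actually changed.
import hashlib

store = {}  # fingerprint -> page text (stand-in for "the network's storage")

def put(page: str) -> str:
    key = hashlib.sha256(page.encode()).hexdigest()
    store[key] = page            # storing an identical page twice is a no-op
    return key

# First edition: 1000 pages, each entry the fingerprint of one page.
first_edition = [put(f"page {n} text") for n in range(1000)]

# Second edition: only the first 15 pages are revised; the rest are identical.
second_edition = [
    put(f"page {n} text, revised") if n < 15 else put(f"page {n} text")
    for n in range(1000)
]

print(len(store))  # 1015 pages stored in total, not 2000
```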
It gets even more space-efficient than that!
Data is broken up into small, easy-to-transfer packets. This is common practice for HTTP, torrents, and pretty much every internet protocol. On IPFS, however, you avoid duplicating data not only from one version of a file to another, but across different files. The whole system is one big pool of packets that never need to be duplicated.
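A quick toy example of that cross-file sharing (the chunking here is naive fixed-size splitting, and the "films" are made-up bytes, but the effect is the same): two completely different videos that both contain black frames end up sharing those packets in the store.

```python
# Toy demonstration: identical packets are stored once, even across files.
import hashlib, os

CHUNK = 256 * 1024        # an invented 256 KB packet size
blocks = {}               # fingerprint -> packet bytes

def add_file(data: bytes) -> list:
    """Split a file into packets and return the list of packet fingerprints."""
    keys = []
    for i in range(0, len(data), CHUNK):
        packet = data[i:i + CHUNK]
        key = hashlib.sha256(packet).hexdigest()
        blocks[key] = packet              # identical packets collapse into one
        keys.append(key)
    return keys

black = b"\x00" * CHUNK                          # stand-in for a black screen
film_a = black + os.urandom(CHUNK * 3) + black   # unique footage + black frames
film_b = black + os.urandom(CHUNK * 4) + black   # a totally different film

add_file(film_a)
add_file(film_b)
print(len(blocks))   # 8 unique packets stored, not the 11 a naive copy needs
```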
In this scenario, you are in Colorado and I am in Guatemala.
Imagine you made a film and want to make it available on the IPFS web. You don't need to "upload" it anywhere, because there are no servers. You simply put it on your computer and publish it to the IPFS web. The software will break your film down into probably tens or hundreds of thousands of packets. Maybe more.
Now imagine that several of those packets represent a black screen with no sound: a few seconds at the beginning and end of the film, and maybe a couple of seconds or fractions of a second in the middle. Those packets are exactly the same as the packets representing a black screen in any other video file on the planet.
Now say I want to watch your movie. I'm the first person to do this, so right now the film is ONLY on your computer. I put the address of the film (a fingerprint of the data in the film) into my computer. It begins to download, or even stream, the film from your computer directly to mine, packet by packet. Each of those packets has its own fingerprint/address, which is used to locate the data on a computer nearest to mine.
So, for each packet, the software looks for the closest computer to mine that has that data. Most of the packets will be unique to this film, so they will have to come all the way from Colorado to Guatemala. But for many, like the black-screen packets, there will likely be a computer closer to me that is serving them, so the software won't bother going all the way to your computer for those.
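Here's a made-up sketch of that per-packet lookup (the real network uses a distributed hash table and much smarter routing; every name and distance below is invented): for each packet fingerprint, ask who is serving it and pick the closest computer.

```python
# Toy routing table: which computers serve each packet, and roughly how far
# away they are (in km). All names and numbers are invented for illustration.
providers = {
    "packet-unique-scene-1": {"your laptop (Colorado)": 3200},
    "packet-unique-scene-2": {"your laptop (Colorado)": 3200},
    "packet-black-screen":   {"your laptop (Colorado)": 3200,
                              "neighbor's PC (Guatemala)": 2},
}

def fetch_plan(wanted_packets):
    """For each packet, choose the nearest computer that is serving it."""
    return {p: min(providers[p], key=providers[p].get) for p in wanted_packets}

film = ["packet-black-screen", "packet-unique-scene-1",
        "packet-unique-scene-2", "packet-black-screen"]
for packet, source in fetch_plan(film).items():
    print(f"{packet}  <-  {source}")
```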
As this system grows, more and more packets will be shared between different files.
This creates two pretty amazing outcomes.
1.
As your film gets viewed more times, more users on the IPFS web download it, and it becomes spread across the system. The more people who want to store, or even just stream, your film, the faster it becomes for other people to watch it!
For this reason, big companies will have no choice but to embrace this technology. They would save billions of dollars broadcasting huge live events like the Super Bowl this way, using our hardware to broadcast rather than spending tons of money on centralized servers that can handle tens of millions of computers all demanding the same data at the same time. Right now they have to guess how many people will watch and pay high prices for capacity just over that number. If too many people tune in, they have a bandwidth problem. If too few tune in, they have paid all that money for servers that didn't get used.
So this new tech is better for everyone.
2.
In theory, the system could eventually become so data-rich that I could create and add some new, unique file (something simple, like an image or a PDF) and the system could already contain every packet needed to construct the final product. All I would really be adding would be a new fingerprint that describes how all the data needs to be stitched together to create the final product.
In other words, I could publish a PDF and share its fingerprint, and other people across the world could begin downloading the file using that fingerprint, with all of the packets coming from computers closer to them than mine. They could end up with the complete file without downloading a single packet of data from my computer, just by downloading duplicate copies of small pieces of data that together form an identical document.
Because addressing works with a fingerprint of the actual data, you can be absolutely certain that you are receiving exactly the data you asked for. Nothing less, nothing more.
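If you want to see why that guarantee holds, here's a toy sketch: no matter which peers the packets came from, your computer recomputes the fingerprint of the assembled file and compares it to the address you asked for.

```python
# Toy verification: the reassembled data must hash to the requested address.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The address you were given for a PDF (stand-in bytes for illustration).
original = b"%PDF-1.7 ... the actual document bytes ..."
address = fingerprint(original)

# Packets arrive from whichever peers happened to have them; where they
# came from doesn't matter at all.
packets_from_various_peers = [original[:10], original[10:25], original[25:]]
assembled = b"".join(packets_from_various_peers)

# If even one byte were wrong or missing, the fingerprints would not match.
assert fingerprint(assembled) == address
print("verified: received exactly the data that was asked for")
```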
Nowhere to spy from
Now, with a web built like this, there are zero centralized pipelines that the NSA, Amazon, or whoever can tap to suck up all of the data. And if Google wants to store tons of data on its own huge servers, it can go right ahead. All it will be doing is speeding up the web for everyone else by providing another place for files to live. Its servers are no more or less important to the network than my laptop, other than that they can hold more data.
If you need data to be secure, it can be encrypted and work just as well.
When you consider the fact that the data can also itself be software, the implications become staggering.
Blockchain
All of this is different from blockchain technology. Blockchain is somewhat similar in that it involves computers talking directly to each other, but it is much less efficient and much more secure. With a blockchain, instead of sharing data across the network, every piece of data has to exist on every node in the network. You have probably heard of this idea: with Bitcoin, every miner downloads the entire transaction history of the chain. As every block of new transactions is added, all of the miners check in with each other and reach a consensus that the new block is legit and that no one has messed with the system.
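Here's a toy sketch of why tampering gets noticed (nothing like real Bitcoin code, and it leaves out mining entirely, but the chaining principle is the same): each block carries the fingerprint of the previous block, so any node holding the full history can re-check the links on its own.

```python
# Toy chain of blocks: each block points at the fingerprint of the previous
# one, so rewriting history breaks the links.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev": prev_hash, "txs": transactions}

chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(block_hash(chain[-1]), ["Alice pays Bob 5"]))
chain.append(make_block(block_hash(chain[-1]), ["Bob pays Carol 2"]))

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))                    # True
chain[1]["txs"] = ["Alice pays Mallory 500"]    # someone rewrites history
print(chain_is_valid(chain))                    # False: every node notices
```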
With Ethereum, the world's second-largest blockchain and cryptocurrency, you can do more than just handle transactions. You can also run code, called smart contracts, which is what I've been learning to design and build. A smart contract is executed on every Ethereum miner's computer so there can be a consensus on the outcome of the code.
Obviously it is very inefficient to have thousands of computers all running the same code or processing the same transactions. However, this is exactly what makes it so secure. No central authority is needed; instead, you can mathematically guarantee that the money you sent to me actually belonged to you in the first place, or that I have paid for $200 of electricity and received only $150 worth, so I should get another $50 worth before I get charged again.
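To make that concrete, here's a toy sketch of the electricity example (this isn't Solidity and is far simpler than the real Ethereum Virtual Machine): every node runs the same deterministic code on the same inputs, and "consensus" just means they all arrive at the same answer.

```python
# Toy replicated contract: every node executes the same deterministic code.
def electricity_contract(paid: int, delivered: int) -> int:
    """Dollars' worth of electricity still owed to the customer."""
    return max(paid - delivered, 0)

# Thousands of independent machines run the contract; five stand-in nodes here.
inputs = {"paid": 200, "delivered": 150}
results = [electricity_contract(**inputs) for _ in range(5)]

# Consensus: everyone computed the same outcome, with no central authority.
assert len(set(results)) == 1
print(f"all nodes agree: ${results[0]} of electricity still owed")
```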
The efficiency problem is real, but many of the world's best engineers are working out how to keep the same mathematical certainty while shrinking the amount of data and processing power needed. For example, there is a hot new technique called a zero-knowledge proof: a cryptographic method that can verify that I really did send you the $99 I owed, without revealing to the rest of the network who I am, who you are, or how much was sent. It simply verifies that whoever sent it sent the right amount to whoever was supposed to get it.
With innovations like this, the amount of data and compute power needed to keep track of "all of the transactions ever" gets smaller and smaller.
Web 3.0
Now combine all of these technologies and you have what the web might look like in ~2025. Everything that doesn't require that high level of security and accuracy lives and breathes on IPFS. Everything that does is on the blockchain. Big companies can create their own private blockchains or try to keep their old systems alive all they want, but the new stuff will be built mostly on these new systems.
What this essentially could do is eliminate the most expensive and controlling gatekeepers and middlemen from commerce. What’s left is a vastly more efficient and well-oiled machine for the end-user: producers and consumers of content, knowledge, services, etc.
This is a massive shift towards decentralization, the likes of which the world has never seen, and it will change everything about power, economics, and how the world works.
What we are talking about is essentially the world running on a global supercomputer that everyone has equal access to.
In Conclusion
If Google can manage to do something useful with its quantum computer, it has a shot at staying relevant; the push and pull of power continues. But if some organization connects its quantum computer to the network, then we all have a quantum computer as part of the network. And if the others follow suit, so much the better.
Add AI to the mix, and god knows. We could very easily have a system that operates so fluidly and efficiently that we can’t even conceive of it today.
All that is to say that while cynicism is an appropriate take on many issues of our day, I don't think technology is one of them. Why? As networks and systems become more complex, if you want efficiency, you have to sacrifice control. While I'm sure the new tech elite do enjoy their control, they didn't get there by seeking control; they got there by building more efficient systems. Having spent six years in the tech world, I feel confident that most of the tech elite are more passionate about efficiency than about being oligarchs. And even if they aren't, there are currently an estimated 18.2 million software developers who aren't in the elite, and yet can still build these new systems.
Love ya,
Alex