Buckle Up
What will WiFi look like in the 2040s?
Recent electrical engineering conferences like DesignCon or Sensors Converge suggest that it will use higher bandwidths.
The trend shows that new connectors will support terahertz frequencies. The interesting fact is that such wavelengths will soon pass the scale of molecules and atoms. What lies beneath is a deep blue ocean of ether that likely carries radiation with fewer reflections and less interference from materials.
Copyright © Schmied Enterprises LLC, 2023.
It is amazing that WiFi delivered just 15 Mbps twenty years ago. We can easily drive 1 Gbps at home today, almost a hundred times more.
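A quick back-of-the-envelope check of those figures (the two rates and the twenty-year span are the ones quoted above):

```python
# Back-of-the-envelope check of the WiFi growth figures quoted above.
old_rate_mbps = 15      # roughly twenty years ago
new_rate_mbps = 1_000   # a typical home connection today
years = 20

growth = new_rate_mbps / old_rate_mbps
annual = growth ** (1 / years) - 1

print(f"total growth: {growth:.0f}x")          # ~67x
print(f"implied annual growth: {annual:.1%}")  # ~23% per year
```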
The reliability of devices is still an issue. The author has had to replace cable modems every two to three years or so due to incompatibility issues.
Unfortunately, current intellectual property, the hardware blocks that are usually patented and licensed by engineering companies such as ARM, cannot keep up with frequent changes in the networks.
Compatibility is a limitation for such devices. If we want to scale to ten to fifteen internet-of-things devices per person in the next decade, then we need strong standards and lower-cost semiconductors.
The author thinks that a possible solution is to keep end-user devices simple: capture the analog signals and transmit them to the cloud without any interpretation. The cloud can then handle the signal conversion and interpretation, along with authentication and authorization. It would be easy to enable or disable your new TV from a cloud screen on your phone the moment you plug it in.
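As a rough illustration, here is a minimal sketch of what such a thin device could look like. Everything in it is hypothetical: the CLOUD_ENDPOINT URL, the payload layout, and read_adc_samples are illustrative names, not an existing API. The device only samples and forwards; the cloud answers with an enable or disable decision.

```python
import json
import urllib.request

# Hypothetical thin IoT device: it only captures raw analog samples and
# forwards them; interpretation, authentication and authorization happen
# in the cloud. The endpoint below is illustrative only.
CLOUD_ENDPOINT = "https://example.com/ingest"

def read_adc_samples(count: int) -> list[int]:
    # Placeholder for reading an analog-to-digital converter.
    # A real device would read hardware registers here.
    return [0] * count

def forward_samples(device_id: str, samples: list[int]) -> bool:
    payload = json.dumps({"device": device_id, "samples": samples}).encode()
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        decision = json.load(response)
    # The cloud decides whether the device is enabled, e.g. from a phone.
    return decision.get("enabled", False)

if __name__ == "__main__":
    enabled = forward_samples("tv-livingroom", read_adc_samples(1024))
    print("device enabled" if enabled else "device disabled")
```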
This cloud-first approach may not work for defense-grade equipment, but most consumer traffic goes to the cloud anyway. The logic suggests that standards will soon diverge.
Processor development is another interesting topic for the near future. The market clearly suggests an expansion of vector processors, judging by the share price of NVidia, a vector processor manufacturer, compared with Intel and ARM, two classic single-instruction, single-data processor makers.
What are the reasons? The register set designed for the 8086 class of Intel and AMD processors is inefficient for modern artificial intelligence and graphics use cases. Moreover, intellectual property, the hardware design blocks sold separately by the likes of ARM, Intel, AMD, NVidia, and others, needs to be placed side by side on the silicon. This reduces the utilization of the chip surface, making it more expensive to build and more complex to verify.
It is also very difficult to do forensics to catch intellectual property violations in such an environment. This makes the industry more concentrated, more vulnerable to government influence, and more expensive when handling recalls.
This limits the number of semiconductor companies and the number, scale, and diversity of internet-of-things devices available.
The author thinks that field programmable vector processors are the solution.
Vector processors apply a unified instruction set over large, uniform blocks of data such as graphics or artificial neurons. They are less complex, which increases silicon utilization.
Vectors suit artificial intelligence and graphics use cases better than the many special registers of current ARM and Intel instruction sets.
Registers are used in an imbalanced way: modern applications often touch only the lower part of a wide register for their logic. Vector processing drops the most significant parts of registers where they are not needed.
Instructions shared across vectors remove the need for the large non-volatile random access memories that carry the instructions of field-programmable arrays today.
Vectors handle the two-to-four-byte units of large language model tokens or true-color graphics pixels better than expensive CPU registers.
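As a small illustration of this point, the NumPy sketch below treats a row of four-byte true-color pixels as one flat array and transforms all of them in a single vector operation. NumPy only stands in for a hardware vector unit here, and the brightening step is just a placeholder.

```python
import numpy as np

# A row of true-color pixels: 4 bytes each (RGBA), stored as one flat vector.
pixels = np.random.randint(0, 256, size=(1024, 4), dtype=np.uint8)

# One vector operation brightens every pixel at once; a scalar CPU would
# loop over wide registers, touching only their low bytes for each pixel.
brightened = np.clip(pixels.astype(np.uint16) + 32, 0, 255).astype(np.uint8)

print(pixels[0], "->", brightened[0])
```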
The parallel vector sizes available today in the XMM instruction set of Intel-class processors can easily be scaled up in dedicated vector processors, allowing 512- or 1024-byte vectors, with 128 to 1024 of them per piece of silicon depending on the budget. The simple, unified architecture of vector processors may allow one-dollar-grade chips for heavy computing needs.
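For a sense of scale, the per-cycle data width implied by the larger of those assumed configurations is easy to compute:

```python
# Per-cycle data width for the assumed upper configuration:
# 1024-byte vectors, 1024 vector lanes on one die.
vector_bytes = 1024
lanes = 1024
per_cycle_bytes = vector_bytes * lanes
print(f"{per_cycle_bytes / 1024 / 1024:.0f} MiB touched per cycle")  # 1 MiB
```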
Superparallel execution in such vector processors may lower clock frequencies, making these processors less energy hungry and resource intensive. It also handles images produced by generative logic better than copied graphics. Copied graphics were the norm in the old days, and expensive dynamic random access memory was necessary to handle them.
Such vector processors may be able to use programmable but persistent logic across a shifting window over buffers to handle generation, transformation, and compression tasks easily and at low cost.
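A rough sketch of the shifting-window idea, assuming a plain byte buffer and an arbitrary 512-byte window; the delta transform is only a placeholder for whatever generation, transformation, or compression step the persistent logic would actually run.

```python
import numpy as np

# Illustrative shifting window over a byte buffer: persistent, programmable
# logic would process each window in place. The transform below (a running
# delta, a common building block of simple compression) is a placeholder.
buffer = np.random.randint(0, 256, size=4096, dtype=np.uint8)
window = 512

transformed = np.empty_like(buffer)
for start in range(0, buffer.size, window):
    chunk = buffer[start:start + window]
    # Delta-encode the window: each byte becomes its difference from the
    # previous byte, which tends to compress better than the raw data.
    transformed[start:start + window] = np.diff(chunk, prepend=chunk[:1])

print(transformed[:8])
```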
Many companies can pursue many architectures, of course. The challenge is an interesting one.