The quote below is taken from a web site set up by a computer security consultant who is pursuing a PhD.
The paper itself is about a possible design for a computer architecture that would be more secure than the systems that exist on Earth today.
However, upon reading the paper, I succeeded in convincing myself that this concept might be MOST applicable to an off-Earth community, such as the one(s) proposed for Mars.
The author deliberately pulls back from the ultra-dense chip architecture developed on Earth in recent years. Instead, he looks closely at designs that are considered obsolete but are still manufactured, in order to build systems whose security features are designed in from the start instead of cobbled together after the fact.
Wakefield Cybersecurity
Preemptive analysis, consulting, and planning
Can we make our own CPUs?
Posted Friday, April 11 by Marc Abel
It is disheartening to dream of writing secure software—well organized, succinct, thoroughly validated operating systems and applications—knowing it would have to run on silicon with irreparable, undisclosed, and often deliberately introduced vulnerabilities. I propose a “supply chain firewall” for CPUs and systems that can shield (to a large extent) their purchaser or end user from the mistakes, misdeeds, and misaligned interests of semiconductor manufacturers.
A semiconductor plant costs 1 000 times as much as a pick-and-place assembly line, yet either can build a CPU. CPUs made in a “fab” are cheap in large runs, tiny, and perform computations at great speed. In contrast, CPUs soldered from smaller ICs allow superior process oversight during design and assembly, ability to inspect finished CPUs that does not exist with single-chip processors, affordability in even one-off lots, and more options with respect to assembly plant ownership and siting.

In the course of some graduate study, I have been researching the design of end-user-built CPUs, with particular emphasis on arithmetic logic units, or ALUs. The following resource is a snapshot of the work I have done to date, with a lot of technical intricacies to help newcomers to this technology come up to speed quickly. I will be giving a talk about this work in the near future, probably online on account of present conditions. Once the date and time for this talk are known, I will post an invitation on this page.
Elegant ALUs from Surface Mount SRAMs
Marc W. Abel
Department of Computer Science & Engineering
Wright State University
2020 PDF
Wakefield Cybersecurity LLC
Wake secure℠
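To make the core idea of the cited paper concrete: an ALU built from SRAM chips amounts to a precomputed truth table, where the address lines carry the opcode and operands and the data lines return the result. A toy model in Python (the widths, opcodes, and names here are illustrative only, not the actual design in the paper):

```python
# Toy model of an SRAM-as-ALU: the SRAM holds a precomputed truth table,
# so "computing" is just a memory read. This is a 4-bit illustration;
# a real design cascades wider slices and handles carry/flags explicitly.

def build_alu_rom(bits=4):
    """Precompute every (opcode, a, b) -> result entry, as if burned into SRAM."""
    mask = (1 << bits) - 1
    ops = [
        lambda a, b: (a + b) & mask,   # op 0: ADD (carry-out dropped here)
        lambda a, b: (a - b) & mask,   # op 1: SUB (two's-complement wrap)
        lambda a, b: a & b,            # op 2: AND
        lambda a, b: a | b,            # op 3: OR
    ]
    rom = {}
    for op, fn in enumerate(ops):
        for a in range(1 << bits):
            for b in range(1 << bits):
                rom[(op, a, b)] = fn(a, b)
    return rom

alu = build_alu_rom()
# "Reading" the SRAM is the whole computation: no gate network to trust,
# just a memory whose contents can be audited word by word.
print(alu[(0, 7, 5)])            # ADD 7 + 5 -> 12
print(alu[(2, 0b1100, 0b1010)])  # AND -> 0b1000, i.e. 8
```

One reason such a design is inspectable: the entire function of the ALU is data that can be read back out and checked against a specification, which is far harder to do for an opaque gate network inside a single chip.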
I'll post this and then add the link to the paper
Edit#1: http://talk.wakesecure.com/abel-alu-draft-1.pdf
This paper contains content that should be of interest to serious computer science students.
The author invites your (NewMars member) support by posting announcement of the work in your local jurisdictions.
We have potential distribution in: England, Canada, Italy, Spain and numerous regions of the US, as well as other locations on Earth where forum readers reside.
(th)
Last edited by tahanson43206 (2020-04-11 08:31:08)
Offline
tahanson43206,
I'm still reading through it, but I like this guy's thought process. Many very important computing tasks don't require the absurd level of sophistication that modern CPUs have been designed to permit. In point of fact, field-repairable computers for voice radios, life support systems, greenhouses, etc. would be preferable to "black box" computer chips that most programmers, myself included, no longer fully understand. If they ever break, the only practical option is often "buy a new one". Nor are firmware exploits any longer solely within the realm of three-letter government agencies or the handful of individuals who would be working for or against them. When I still lived in Austin 15 years ago or so, I had a friend involved in testing at a chip fab. Even by that time, I could no longer keep up with what they were doing.
I recall making and fixing my own radios as a child. I couldn't do that with modern CPUs running software-defined radios without a deep understanding of the system I'm working on. Our modern CPUs running SDRs are great for reconfigurability and sophisticated data compression, but sometimes I just need to call Bob and tell him that he needs to fix some piece of equipment, and that simply doesn't require the power and sophistication of my pocket supercomputer / iPhone, even though sending a high-resolution image of the broken equipment in a reasonable amount of time might.
Offline
For kbd512 re #111
Thank you for your encouraging first look.
An entire industry on Earth could develop out of this, if I understand what it represents correctly.
I ** think ** this concept could be the nucleus of an "Open Source" hardware movement, similar in intent, and ultimately in extent, to an open source software effort such as Linux and its many descendants, such as the one I use (Ubuntu) or the one where the local 3D printer group leader is a consultant (Redhat).
I admit that Redhat was purchased by IBM, but the local consultant says he has not seen any signs of changes at Redhat, other than the stability of big pockets.
In the case of hardware .... we are a way off from having an organization built up to the size of any of the major open source foundations, but (I'm pretty sure) they ALL started with just one person and an idea whose time had come.
Edit#1: Here is an update from the author cited in the post above:
I found your New Mars post. Looks awesome. Paradoxically, while pulling back from ultra-dense architecture as you mention, my work does not fully escape ultra-dense fabrication; the ALU of Part 4 uses as many transistors as a first-revision Pentium 4. Rather, it "moves down the food chain" into cheap, "predictable" parts that are produced in huge numbers.
(th)
Last edited by tahanson43206 (2020-04-11 10:49:45)
Offline
I wish I could find the website... there's a guy who's built a (500Hz?) computer entirely out of discrete transistors.
I'd love to know how far relays can go though. In part because I've written a fantasy story where semi-conductors don't work.
repost
Home fabricated Integrated Circuit
He's managed to get it to 1975 state of the art. Which means we can have microcomputers, and more importantly, microcontrollers.
Another option for that of course is printed electronics, though they're a lot less powerful (and a lot less expensive). But perhaps enough for the microcontrollers we'd need, so we can keep the expensive chips for our computers.
Use what is abundant and build to last
Online
reposting
For Terraformer re #8
Thank you ** !! ** for this find!
I'm looking forward to finding out what this technology looks like.
I asked Google what operating systems were current in 1975, at:
https://www.thocp.net/timeline/1974.htm

The CP/M(23) (Control Program/Monitor, also: Control Program for Microcomputers) will be designed by Digital Research / John Torode and Gary Kildall(24) for the 8080 chip. In fact it will be the first operating system for that particular microprocessor. Many manufacturers of microcomputers adopt this system. It has all the characteristics of a well designed program: it is small and compact, relatively fast, and above all stable. The first operating system to run (almost) independent of a platform.

Microsoft DOS was just over the horizon. That suggests to me that continuing development by the gent you've cited should enable/facilitate an OS at that level, or the Linux equivalent.
Thanks again!
Edit: This work was done by a high school student! He has since left for college, but indicated he hopes to continue working in future.
The completed project is a set of gates on a chip, so it is a LONG way from a programmable computer, but it certainly does show what is possible when working within the limitations available to him.
A generous donor provided an instrument to assist with testing completed chips.
(th)
For Terraformer re #113 ... thanks for the tip!
Actually, relays are quite fast if you build them at the nanometer scale. It's been a while since I looked at Eric Drexler's "Engines of Creation", but my recollection is he described a number of simple machines that performed logical operations at the molecular level.
Decades have passed since Drexler helped to open the flood gates, so it would not surprise me to learn about significant advancements.
Star Trek's famous replicators were fantasy of course, but the underlying principle of atomic assembly is what nature uses constantly to build trees, plants, animals and people.
I expect we'll see atom placement devices move from one-of-a-kind laboratory items (as in the 1960s) to commonplace kitchen items in future decades. Commercial applications of such devices should be on the scene in a few years, however, and they may well be running in laboratories today.
(th)
Offline
Terraformer,
I'm not sure if this is what you're talking about, but this guy built a computer from discrete components:
Then there's this thing:
P3 Orion Top Secret - UNIVAC 1830
For those who are interested in some of the technologies that Marc Abel proposed using to construct modern equivalents of discrete component computers for critical systems:
Learn About Memory Technology and More
I also like this website, which explains some of the performance bottlenecks being experienced by state-of-the-art computing technologies:
RISC vs. CISC - Modern Analogy (2018-08)
Multi-Processors Must Die (2018-08, edit 2018-08-09)
The Case for Single Processor (work in progress)
The overriding point seems to be this: the reason we're seeing diminishing returns from faster DRAM, multi-core CISC architectures, and higher clock speeds (or, even worse, multi-processor systems) is that RAM and CPU capacity have become cheap and astronomically large, but the latency of accessing them hasn't kept up. Every fetch from main memory burns multiple clock cycles moving instructions and data between the chip and RAM across the bus. So when we really start loading up our cores (actually attempting to use most of our CPUs and clock cycles for compute-intensive operations that touch main memory), the latency of the architecture itself is what gets in our way. And when the chip is starved for instructions because of that latency, it's mostly just consuming electrical power uselessly while very little actual work is accomplished.
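This memory-wall effect can be glimpsed even from a high-level language. The sketch below (purely illustrative; absolute times depend entirely on the machine and interpreter, and CPython mutes the effect compared to native code) sums the same array twice, once in cache-friendly sequential order and once in a shuffled order that defeats the cache and prefetcher, so the CPU spends more of its cycles waiting on RAM:

```python
# Rough illustration of the "memory wall": identical work, different wait.
# Only the ratio between the two timings is meaningful, and it varies widely.
import random
import time

N = 1_000_000
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)   # same indices, cache-hostile visiting order

def walk(order):
    """Sum data[] in the given index order, returning (total, seconds)."""
    t0 = time.perf_counter()
    total = sum(data[i] for i in order)
    return total, time.perf_counter() - t0

total_seq, t_seq = walk(seq_order)
total_rand, t_rand = walk(rand_order)
assert total_seq == total_rand   # the arithmetic is identical either way
print(f"sequential: {t_seq:.3f}s   shuffled: {t_rand:.3f}s")
```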
Lessons?:
Something akin to RISC was always going to be required as performance approached the limits of what CISC could complete in a single clock cycle. RISC was forward-looking, but also way ahead of when it would actually be required. Multi-core CISC was a band-aid solution for a problem that's since been solved. Something like 1T SRAM needs to replace DRAM, and that SRAM needs to be on the same die as the CPU. For that matter, the GPU needs to be on the same die. 3D NAND flash will also be required to improve storage latency.
Do general purpose cell phone / tablet / laptop / desktop computers need to be built that way? I guess it always depends upon what you're trying to do with it, but probably not. For real time systems / database servers / web servers / modeling workstations / medical equipment, they probably do.
This futuristic interplanetary spaceship concept that I wanted to have built using Starship and low cost electromagnetic launch payload delivery is going to need to use a combination of tech from the 1960's era of discrete components for critical functions such as life support / propulsion management / voice communications (tasks that computers created using discrete components are fundamentally simpler and easier to support in the field on account of technician understanding and field repairability) and state-of-the-art super computers required to try to keep up with the insatiable computing demands of sophisticated sensor arrays and high data rate transfers associated with imaging / navigation / scientific instruments such as high resolution telescopes.
Offline
The problems of control of critical systems by computer have been addressed by the chemical and pharma industries, among others. The popular solution is redundancy! Use of multiple processors all doing the same job allows for detection of failure and continued operation or safe shutdown. Hardware failure is fairly easy to deal with using this method. Software remains the main source of failures.
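The redundancy scheme described above boils down to majority voting across independent processors running the same job. A minimal sketch of the voting logic (the function name `tmr_vote` is my own invention, for illustration):

```python
# Triple modular redundancy (TMR) in miniature: run three independent
# channels and majority-vote the outputs. One faulty channel is outvoted;
# total disagreement can trigger a safe shutdown instead of bad output.

def tmr_vote(a, b, c):
    """Return the majority value, or None if all three disagree (fail safe)."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None   # no majority: signal the safe-shutdown path

# One channel returns garbage; the vote still yields the correct reading.
assert tmr_vote(42, 42, 99) == 42
assert tmr_vote(99, 42, 42) == 42
assert tmr_vote(1, 2, 3) is None   # all differ -> safe shutdown
print("voter ok")
```

As the post notes, this catches hardware failures well; it does nothing against a software bug replicated identically across all three channels, which is why software remains the main source of failures.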
Offline
This is a follow up to an earlier post about a PhD student whose thesis is design of a robust computer architecture.
http://newmars.com/forums/viewtopic.php … 11#p167311
The gent called me yesterday (in response to email) and provided several updates ...
The thesis defense went well. There is more required to earn the PhD but the primary hurdle appears to have been met.
The gent is using a small grant to build a prototype of the system.
If I understood correctly, there is some interest on the part of an organization ... I couldn't tell if the organization is private or governmental.
There is an application for another (larger) grant in the works.
The business model (as I understand it) is to provide the design as an Open Source license, and then offer consulting services to build the machines.
The engineering model is distinct from the trend in the industry ... instead of designing more and more dense chips, this design features mass produced chips that can be assembled into a powerful processor designed from the outset to be robust in both the anti-hacking sense, and in the radiation resilience sense.
Because governments (and some private corporations) are subject to hacking by determined (very smart) people, an architecture designed from the outset to be resilient in the face of hacking attempts may be of interest.
Processing speed is slow compared to industry benchmarks, but the tradeoff is resilience.
Edit#1: I think I overlooked a detail that should be of interest to Americans ... the design is intended to be manufactured in the United States.
What I'm not clear on at this point is if the United States still possesses the ability to make chips used in the machine. I'll check on that the next time I have a conversation with the student.
(th)
Offline
tahanson43206,
The relative complexity or simplicity of a system does not determine its security against hacking, and it never has. In point of fact, much simpler systems were historically often easier to hack. Only logical fallacy would ever claim otherwise. For example, the WWII-era German "Enigma" encryption machine was not "less susceptible" to code breaking attempts than an encryption algorithm such as DES that requires more modern solid state electronics to use efficiently. We were able to partially crack their algorithm prior to obtaining a copy of their Enigma machine and the encoding sequences (rotor settings) that they stupidly continued to use after capture of a vessel carrying that machine. Plain and simple, lapses in operational security cost them their ability to communicate securely. Enigma may have been "less susceptible" to the brute force methods available at the time, but a microprocessor would've been able to crack the algorithm Enigma used in real time. Any claims of security through obscurity should be heavily scrutinized, because they're probably invalid.
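The Enigma point can be made with a little arithmetic: the machine's raw keyspace was enormous, yet it fell to cryptanalytic method (cribs, reused settings) rather than brute force. A back-of-envelope sketch using the commonly cited 3-rotors-from-5, 10-plug-pair configuration (ring settings omitted, as is conventional for this figure):

```python
# Back-of-envelope Enigma keyspace: a huge number of keys did not make
# the machine secure, because the attack was analytic, not exhaustive.
from math import factorial, perm

rotor_orders = perm(5, 3)      # choose and order 3 of 5 rotors: 60 ways
rotor_positions = 26 ** 3      # starting positions of the rotors: 17,576

def plugboard_pairings(pairs=10):
    """Ways to connect 10 disjoint letter pairs on a 26-letter plugboard."""
    return factorial(26) // (factorial(26 - 2 * pairs)
                             * factorial(pairs) * 2 ** pairs)

total = rotor_orders * rotor_positions * plugboard_pairings()
print(f"{total:,}")   # about 1.59e20 keys, dominated by the plugboard term
```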
We have our own government-run chip fabs and other defense-related electronics manufacturers that can make electronics of whatever description our government requires. Defense contractors favor sourcing certain components from overseas to reduce costs, not because the quality is any better or they can't obtain those components from US sources. They don't want to pay for them, plain and simple, and they make the argument that the tax payer shouldn't have to pay more to employ Americans, which is perhaps the worst argument they've ever presented for doing what they do, as it pertains to sourcing electronics components and computer software engineering expertise. If it were up to me, there wouldn't be any defense use computerized system made with overseas components. There would be no permitted movement of any electronics into or out of secure areas where defense-related hardware or software is under development, either. Similarly, human factors and operational security should be the most heavily scrutinized of all possible ways to subvert any system touted as being "secure" or "more secure than the next system".
Offline
Sounds like they are making a hardware-based computer instruction set similar to the old AND, OR, NAND, etc. gate systems of the late 70's...
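That gate-level picture is easy to make concrete: NAND is functionally complete, so every ALU operation can in principle be wired from that one gate type, which is how many late-70's discrete and small-scale-IC designs worked. A toy sketch (function names are mine, for illustration):

```python
# NAND is functionally complete: every other gate, and from them a full
# adder, can be built out of NANDs alone.

def NAND(a, b): return 1 - (a & b)

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):
    # the classic four-NAND construction
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

def full_adder(a, b, cin):
    """One bit of addition, entirely from NAND-derived gates: (sum, carry)."""
    s1 = XOR(a, b)
    return XOR(s1, cin), OR(AND(a, b), AND(s1, cin))

# Exhaustively verify the adder against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert cout * 2 + s == a + b + c
print("full adder verified")
```

Chaining such adders bit by bit gives a ripple-carry adder, the heart of the simple ALUs of that era.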
Offline
For SpaceNut re #326 .... I ** think ** you're close ... I'll see if I can find the link to the paper to confirm what chips are involved.
The difference between this design and some earlier attempts along these lines is that the student (in this case) is particularly focused on Internet security. Security was the ** last ** thing on the minds of the original Internet creators, since they were attempting to do something that had never been done before. It was enough just to get the contraption to work at all, let alone deal with hackers.
Now, 40-some years later, there's been time to rethink the entire structure.
January 1, 1983 is considered the official birthday of the Internet. Prior to this, the various computer networks did not have a standard way to communicate with each other. A new communications protocol was established, called Transmission Control Protocol/Internet Protocol (TCP/IP).
(th)
Offline
Banks used Token Ring and multi-ring FDDI for security, which could coexist with slow Ethernet until bandwidth became an issue that Ethernet and Fast Ethernet made disappear.
Offline
tahanson43206,
I would like to read that paper to identify source manufacturers for the chips.
For use on Mars, I would think that critical systems such as life support, voice, and low data rate communications would be better served by machines that don't have chips to fry with ESD, or with the radiation that Mars's nearly non-existent magnetic field fails to block the way Earth's does. If all the components are discrete, physically robust, electrically protected, and use much higher operating voltages than prototypical computer components, then minor over-voltages caused by SPE / CME / GCR / ESD should be less of a problem (ESD / EMP hardened by design). I was thinking of discrete components mechanically clamped to thick plastic boards using copper strips (reconfigurable electrical connections for future circuitry upgrades or alterations) laid atop a special type of breadboard, so all components can be quickly and easily removed / replaced by someone wearing space suit gloves. These would be very simplistic computer controls, much heavier than absolutely necessary, but nearly impossible to destroy, unlike integrated circuits. For a moon or Mars base, I want a computer that operates off of a 440V AC bus. I don't care if an iWatch can perform all the functions of a device the size of a refrigerator. The only point that matters is that one good solar storm can brick that miniaturized wrist-borne computer, with no possibility of repairing it, whereas this computer will still function long after everyone who built it is dead.
See Konrad Zuse's relay-driven computer for what I had in mind:
The Z1: Architecture and Algorithms of Konrad Zuse’s First Computer
The Design Principles of Konrad Zuse’s Mechanical Computers
Offline
A relay-driven system can still be affected by RF from keying a two-way radio in close proximity. I know, as I built a relay-driven breathing apparatus that was seeing this problem with its chip electronics. It used magnetic reed switches to sense the extended or retracted position, with a center reed switch to determine direction of travel, for when a switch was opened or closed to send power to the control relays. Even with hardened relay control, it still would change position.
Offline
For kbd512 re #329
Your interest in this student's work would (most likely) be welcome! There aren't many people on Earth who would be interested at all, let alone know what the student is trying to achieve!
I'll send an email today to follow up, to see if there are any updates!
In the mean time, I'll check to see if the original paper is still online.
Thanks again for your interest .... this particular student started out as an eighth grader who showed up to help with a radio astronomy project at a local university. He's been contributing on and off ever since, but his ** real ** passion appears to be this CPU undertaking.
Edit#1: Here's the original report with link: http://newmars.com/forums/viewtopic.php … 11#p167311
Edit#2: email inquiry transmitted.
(th)
Offline
For kbd512 re #329
The student read your post and SpaceNut's reply.
He said he will think about both posts before attempting a reply. I suspect there may be a gap between what he is designing (and building) and the ideal you have described. After spending years working on something this complex, it might be disappointing to learn that it is not "good enough" for someone.
Of course, that ** is ** how life is .... no matter what an individual human may do, there is always at least ONE person who thinks it could have been better.
(th)
Offline
Electronic, electrical, and mechanical uses of any device created are always conditional, and we accept their failure to perform. That it can occur no matter what we build is a given. It's through those failures that we learn to create, with other ideas and thoughts, something that fails less often or not at all.
Remember, I have real-life experience from the late 70's in the electrical/electronics fields of manufacturing and design, which goes a long way toward understanding the path from vacuum tubes, transistors, etc. all the way through to modern components...
Offline
tahanson43206,
Thanks for following up. Although the security aspect of the proposed design is intriguing, what I find more interesting is designing a reliable / robust / simple computer that can autonomously monitor and regulate critical life support and communications systems functions. The function of this computer would be limited to HF / UHF / VHF voice communications, low-rate data transfer, temperature and pressure control, atmospheric composition regulation, and nuclear reactor monitoring and regulation.
So long as VLSI microchips function properly, we can and should use those little marvels of modern electrical engineering to reduce power consumption. However, this computer control system would be a more environmentally tolerant, if simplistic and power-intensive, backup. It would not be materially affected by CME / SPE / GCR radiation at the levels encountered on Mars, and would have little possibility of malfunctioning as a result of ESD. This computer needs to function as intended in the absence of active thermal regulation, meaning hard vacuum with +250F/-250F temperatures, even if it's literally buried in regolith, and across the full gamut of temperature ranges encountered on the surface of Mars.
Design Features:
Chassis / Enclosure: 2050-T84 Aluminum-Lithium alloy - selected for fatigue resistance over the design operating temperature range, corrosion resistance, lighter weight compared to 2219 or 5083, and suitability for acting as a thermal sink (will contain SCCO2 in sealed gun drilled channels that function as heat pipes, in order to transfer and radiate thermal energy from the components into the surrounding environment)
ARTICLE: The Evolution of Constellium Al-Li Alloys for Space Launch and Crew Module Applications
If the enclosure is 12" L x 12" W x 1" H, then both halves weigh 28.2 pounds, or 14.1 pounds per plate. We're going to gun drill channels in the material and seal (welded-in plugs) SCCO2, so it'll be slightly less than that. I suppose we could attempt some kind of diffusion bonding, although MIG welding followed by machining the surface flat should also work.
TWI DEVELOPS NEW ALUMINIUM DIFFUSION BONDING TECHNIQUE
Chassis Coating: Nickel-Boron - to provide high surface hardness, resistance to corrosion from handling with human hands, as well as Nickel-Boron's ability to efficiently transfer heat
UCT Coatings - EXO Nickel Boron Coating
Chassis Sealing: flat Viton gasket - the sealing surfaces of the chassis will be machined flat, with very high tolerances
Chassis Clamps: MagSwitch style permanent magnet twist locks (there's a new magnetization technology that can imprint multiple magnetic poles onto the surface of a permanent magnet, such that a quarter turn will magnetically attract or repel two permanent magnets)
Circuit Board: PEEK plastic plate (a suitably thick pure plastic plate, not a thin fiberglass-reinforced composite) - selected for temperature resistance over the design operating temperature range, as well as its use in current applications for circuit boards used in satellites or for downhole tools used by the oil and gas industry, where temperature extremes are the norm
Circuit Races: ETP Copper with spring-loaded Gold-plated connectors, molded into the PEEK plate - all QFN type surface-mount components will use mechanical compression / clamping of the components (the quarter-twist-to-lock permanent magnets around the periphery of the chassis will supply the clamping load, requiring a small specialized magnetic tool to lock / unlock) between the upper and lower Aluminum chassis halves, into the circuit races embedded into the PEEK plate to assure positive electrical connection
Component Form Factor: flat no-lead components, primarily simple and robust semi-conductors, such as MOSFETs and SRAMs; if possible all components should be the same height so that they're all equally compressed into the circuit races embedded in the PEEK plate (the general idea is that all electronics components are individually replaceable units that do not use any soldering processes, so all serviceable components can be reused; after you remove the chassis shell, you can lift out the PEEK board and dump all your components into a bag, if necessary)
Thermal regulation - a series of Pu238 GPHS modules would supply thermal input "survival power" for operation in mildly cryogenic environments, using the chassis shell embedded SCCO2 heat pipes for thermal power transfer
Input / Output: electrical power, sensor input, operator input / output provided by mag-safe type connector ports embedded into the PEEK plate
Power Supply: Board-integrated power supply that uses 440V 6-phase AC power, as supplied by the SCCO2 gas turbines used by the nuclear reactors or solar thermal
What's not present: no fans or coolant pumps, no VLSI microchips that are overly-sensitive to radiation effects or ESD; all semi-conductors should be rad-hard by process, as well as rad-hard by design
I've changed my mind on attempting to operate the electronics at higher voltages. It can technically be done, and is routinely done by power MOSFETs, but it's also very wasteful of electrical power. If you ever need to rely upon this computer for critical functions, you may also be operating in a power-constrained environment. I fully expect that these machines will be in the 60 pound range, same as some of the massive old Sun Workstations. No special consideration will be given to where these machines will be located. It should be feasible to put one of these devices in a habitation module, buried under the habitation module, in a bucket of cold brine outside, or to simply prop it up on a rock sitting on the surface of the moon in full Sun or full darkness. These machines must also tolerate vibrations from being vehicle mounted and operated, since some of them will be, as well as falling off of a moving vehicle or being run over or backed into by the vehicle, without serious damage. I expect that all of those events will happen, given enough time. Only egregiously bad siting that nobody should expect a computer to survive should render a computer inoperable.
We do need a general purpose design that can regulate life support equipment such as CO2 scrubbers, operate a software-defined radio, operate a battery or gas turbine powered vehicle, operate construction-related equipment such as drills, and operate scientific equipment such as radars or lasers or cameras. The computer will have limited graphics capability, but it must support output to a display, input from a keyboard or track ball, transfer of files between machines. For archiving purposes, we can also have purpose-built solid state file servers embedded into Tungsten Carbide balls (only $70 for a 1" diameter ball) to prevent GCRs from scrambling the stored data. The most modern storage chips can cram about a terabyte of data into a chip smaller than your finger nail, so the chip would be potted in the center of the ball and a small hole drilled for a data mag-safe data cable. We drill them out for use as heavyweight shifter knobs, so we may as well use them as radiation shielding for archived data storage on Mars (that superficially seems like a better use of the material). I propose that these things also be used as miniature bowling balls for use with miniature plastic bowling pins or for juggling or possibly for space suit shot-put events, to provide crew entertainment. You're not going to damage them, so you may as well have fun with your data storage while you're at it. You can think of that as "computer games" for people who can't run the latest and greatest hardware, on account of the excess power consumption.
Edit: Instead of solid plates, we could change the form factor to the lengthened Tesla / Saft Lithium-ion battery I proposed for our battery powered bulldozer. By having a handful of chips mounted in each PEEK "hockey puck" sandwiched between Aluminum spacers / heat spreaders, a simple screw cap could seal the entire unit. We could put them into bore holes not occupied by batteries in the battery blocks I proposed using. The battery block would take care of thermal management, provide additional radiation shielding, and supply electrical power to the computer. If you're gonna have batteries all over the place, that's a logical place to put your life support computer as well.
Offline
tahanson43206,
First, Happy New Year!
Second, I just finished reading all of Marc Abel's paper and now have a much better grasp of what he's proposing. Briefly, it puts far more of the onus for generating correct output on the operating system, by using processor logic that is significantly less affected by malformed input or arithmetic overflow, rather than on the programmer writing the programs run by the operating system. Determining what proper output is can be non-trivial for all but the most trivial software. Yet all compilers, operating systems, and device drivers are non-trivial programs to write, so evaluating the relative security of the foundation of the computing environment that almost all ordinary users interact with is non-trivial as well, and utterly impractical for the majority of users. He makes a well-reasoned argument as to why ignoring aberrant operation was a serious security flaw. Much deeper than security, my own read between the lines of what he describes is a very compelling argument as to why simply testing a program to determine that the code operates as intended, even in the absence of any malicious activity, is such a difficult and time-consuming task.
Anyway, what he's working on seems like a natural fit for minimal complexity operating systems that monitor sensor inputs, direct hardware to take corrective action when necessary (add more O2, run the CO2 scrubbers, provide numerical feedback to the user indicating low O2 or high CO2, etc). These sorts of systems don't benefit much from breathtaking throughput speeds, but they do require easily testable boundary conditions that are not likely to be affected by untestable aberrations caused by the improper operation of some other simultaneously running program. For example, if a bug in a simultaneously executing SDR program causes an arithmetic overflow, under no circumstances should that ever affect the processing of sensor input monitoring related to life support functions. The way most firmware and hardware function on most general purpose processors would make such a determination impossible. The way current operating systems would deal with such a problem, if detected, is to halt all processing, a completely unacceptable result when that operating system monitors and regulates life support systems.
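The fault-isolation idea in the paragraph above can be sketched in miniature: arithmetic that traps on overflow instead of silently wrapping, plus a supervisor that confines the fault to the offending task. This is a hypothetical illustration (the names `checked_add16`, `run_tasks`, and the task list are mine, not anything from Abel's design):

```python
# Sketch: trap-on-overflow arithmetic plus per-task fault isolation, so a
# bug in one program (the SDR decoder) cannot disturb another (life support).

class ArithmeticFault(Exception):
    pass

def checked_add16(a, b):
    """16-bit unsigned add that raises instead of silently wrapping."""
    r = a + b
    if r > 0xFFFF:
        raise ArithmeticFault(f"overflow: {a} + {b}")
    return r

def run_tasks(tasks):
    """Run each task in isolation: a fault kills that task, not the others."""
    results = {}
    for name, fn in tasks:
        try:
            results[name] = fn()
        except ArithmeticFault as e:
            results[name] = f"FAULTED ({e})"   # logged; other tasks continue
    return results

results = run_tasks([
    ("sdr_decoder", lambda: checked_add16(0xFFFF, 1)),   # buggy radio task
    ("co2_scrubber", lambda: checked_add16(400, 17)),    # life-support task
])
print(results)   # sdr_decoder faulted; co2_scrubber still returned 417
```

The contrast with "halt all processing on error" is the point: the scrubber's result is unaffected by the decoder's overflow.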
Anyway, I'd be very interested to know what additional progress he's made on the ALU designs described in his paper. Has he actually built his three-layer ALU design, and does he have any testing behind it?
The recent hacking of critical infrastructure has illustrated how important his proposed processor would be, not only to the military and their defense contractors, but also to running the businesses that fuel our economy. The ecosystem of software making use of this fundamentally different CPU and OS also has to be built. I'm not sure what would have to happen before the damage from insecure computer systems is deemed intolerable, but it would be best to have a practical solution ready to implement once that determination has been made.
Offline
for kbd512 re #340
Thanks for this thoughtful response to Marc's work! I will forward the link to your post immediately.
This is the kind of feedback that (I am hoping) will be appreciated and welcome.
Marc is (let's say "open to" rather than actively seeking) grants to continue developing the hardware.
I am happy to serve as a conduit if anyone in the readership wants to discuss this possibility with him.
(th)
Offline
Sounds more like working in assembly language or processor operands. We called them bit chasers in the early days. The small PIC devices are designed around small computing.
Offline
For SpaceNut re #342
Thank you for your continued interest in Marc's project ... This is a 36-bit hardware design.
My guess (not having discussed this with Marc) is that the software that operates at the hardware level will be written (as you suggest) in assembler, but no one will ever see that. Modern computers are designed to provide a virtual environment on top of which any operating system can be installed, so that the programmers who work with languages that run on their OS of choice will be able to do so.
One detail that I recall from the recent phone conversation is that Marc is planning to use static RAM chips for memory. He says they cost more than dynamic RAM, but are far more reliable.
As a reminder, this processor design is intended to be published as an Open Source document when it is finished, so (theoretically) anyone with access to the chips could build a system. The virtual OS that runs on it will probably be worth paying a few dollars to acquire.
(th)
Offline
Sounds like this is what is called the BIOS, or boot-up routine. Most computers still run an initialization routine after power is applied. Some of this sets up the processor while getting peripherals going.
Offline
For SpaceNut re #344
Thank you for an interesting comparison! Modern virtual OS environments are designed to emulate well-known processors, even though the underlying hardware may be dramatically different. Microsoft Azure (as just one example) offers you, the customer, the option to set up a Linux OS, a Microsoft OS (of various kinds), and probably others I'm not aware of.
The underlying hardware can be something completely different.
The virtual overlay provides such a complete emulation that the OS you install ** thinks ** it is running on an Intel chip or an AMD chip or whatever your particular selection expects.
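For anyone who has never looked inside such an emulator, the core idea is just an interpreter loop: fetch a guest instruction, decode it, and carry out the equivalent operation on the host. The three-instruction "guest CPU" below is entirely my own invention for illustration; real hypervisors like VirtualBox are vastly more sophisticated and usually lean on hardware virtualization rather than pure interpretation.

```python
# Toy illustration of software emulation: a made-up 3-instruction
# guest CPU interpreted on whatever host actually runs this code.
# Invented opcodes for this sketch: 0 = LOAD imm, 1 = ADD imm, 2 = HALT.

def run_guest(program):
    acc = 0      # the guest's single accumulator register
    pc = 0       # the guest's program counter
    while True:
        op, arg = program[pc]   # fetch
        if op == 0:             # decode + execute: LOAD immediate
            acc = arg
        elif op == 1:           # ADD immediate
            acc += arg
        elif op == 2:           # HALT: return the guest's final state
            return acc
        pc += 1

# Guest program: LOAD 40; ADD 2; HALT
print(run_guest([(0, 40), (1, 2), (2, 0)]))   # prints 42
```

The guest program has no idea what the host hardware is; it only sees the behavior the interpreter promises. Scale that idea up to a full instruction set, memory map, and device models, and you have the "perfect emulation" described above.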
Here is a snippet from Google >> Wikipedia that offers a glimpse of what is now fairly routine ...
VirtualBox (hosted hypervisor)
Oracle VM VirtualBox is a free and open-source hosted hypervisor for x86 virtualization, developed by Oracle Corporation. Created by Innotek, it was acquired by Sun Microsystems in 2008, which was in turn acquired by Oracle in 2010.
Developers: Oracle · Innotek
Written in: C, C++, x86 Assembly, Python
(th)
Offline
Yes, emulators can be created for chip sets and even for an OS, such as Android running on a Windows platform.
Offline