The 8051: ARM's nemesis in the IoT space?


http://forum.eetasia.com/BLOG_ARTICLE_19114.HTM?click_from=8800109483,8723946550,2014-01-28,EEOL,ARTICLE_ALERT

 

At present, 8- and 16bit microcontrollers dominate the market for connecting wirelessly untethered embedded devices and sensors (i.e., the Internet of Things), and I am not convinced that 32bit MCUs, no matter how cheap they become, will ever match the ubiquity of their smaller brethren in such designs.

 

In this blog, Jack Ganssle reviews all the reasons why 32bit CPU (and ARM) dominance is likely but not certain, and warns that although ARM Ltd. is in the top-dog position at this time, it may not always be so.

 

He outlines a number of reasons why it could be blind-sided by some unforeseen nemesis, probably an open source microcontroller architecture.

 

One reason, he suggests, is that ARM collects a tax on each part sold, so even if all other costs were zeroed out, these devices cannot compete with 8- and 16bitters in the most price-sensitive applications.

 

I agree with him on this, and I think the challenge to ARM's dominance is most likely to occur in the context of the design challenges of adding Internet Protocol to IoT devices.

8051: IoT’s dark horse ARM nemesis?

I suggest that ARM's nemesis in the IoT space could be the 8051 microcontroller, which has been "hiding in plain sight" for at least five years.

 

Although Intel discontinued it in 2007, other companies have kept it alive in various incarnations, and it is now widely supported.

 

In the Internet of Things segment, low power requirements are forcing ARM and its 32bit MCU licensees to try approaches such as near-threshold-voltage Cortex-M0 MCUs to allow 32bitters to compete with existing 8- and 16bit alternatives.

 

Intel, with its new Quark X1000 SoC, seems embarked on a similar strategy.

 

Frankly, though, these efforts remind me of American auto makers in the 1970s when the various Arab oil crises hit. Fuel-efficient autos from Japan and Europe were stealing the market, and all U.S. auto firms could do was try to re-engineer their behemoths.

 

They got more fuel efficient, but only at the cost of performance.

But for an alternative CPU to challenge ARM's dominance, I believe it must meet several conditions:

* As Jack emphasizes, it must not be saddled with the licensing fees that ARM imposes,
* it must be well known and widely used,
* it must already have a base of developers who at some stage became familiar with the architecture,
* if not free, it must be broadly available from a number of suppliers in chip form,
* its features and capabilities must be continually upgraded, and
* there should be a wealth of hardware and software development tools available.

 

The 8051 meets all these conditions.

The first argument raised against my thesis is not against the 8051 alone, but against all 8- and 16bit MCUs in the IP-driven Internet of Things.

 

Full support of the IPv6 protocol, the argument goes, requires a 32bit architecture, especially if each of the 'things' has its own unique IP address.

 

Not true. A 2003 article written shortly after IPv6 became available, "IPv6 on a microcontroller" by Robert Muchsel, illustrated that it was possible to support IPv6 on an 8bit MCU.

 

Also, Adam Dunkels, in a later article, "Towards TCP/IP for wireless networks," put more nails in the coffin of the idea that only 32bit CPUs can handle the requirements of a fully IP-enabled "Internet of Things" implementation.

 

For some IoT applications, full support of the TCP/IP stack is not necessary, and it often gets in the way when deterministic responses are required.

 

For those cases, the protocol suite's UDP subset requires far fewer resources and does not get in the way of the MCU's real-time duties. Indeed, if you look closely at some implementations of the 6LoWPAN adaptation layer added to IPv6 in 2007, they rely exclusively on UDP, as the sketch below illustrates.
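To make that concrete, here is a minimal sketch of a UDP-only sensor node written against the simple-udp API of Contiki, Adam Dunkels' open source OS, which has ports to 8051-based radio SoCs such as TI's CC2530. The port number, payload, and ten-second interval are illustrative assumptions, and the header path varies between Contiki versions:

```c
#include "contiki.h"
#include "simple-udp.h"   /* net/ip/simple-udp.h in Contiki 3.0 */

#define UDP_PORT 5678     /* arbitrary port chosen for this example */

static struct simple_udp_connection conn;

/* Invoked by the stack for every datagram that arrives on our port. */
static void rx_callback(struct simple_udp_connection *c,
                        const uip_ipaddr_t *sender_addr,
                        uint16_t sender_port,
                        const uip_ipaddr_t *receiver_addr,
                        uint16_t receiver_port,
                        const uint8_t *data, uint16_t datalen)
{
  /* react to a command from the network, if any */
}

PROCESS(udp_sensor_process, "UDP-only sensor node");
AUTOSTART_PROCESSES(&udp_sensor_process);

PROCESS_THREAD(udp_sensor_process, ev, data)
{
  static struct etimer timer;
  static const char reading[] = "t=21.5";   /* dummy sensor reading */
  static uip_ipaddr_t dest;

  PROCESS_BEGIN();

  /* One connection object, no handshakes, no retransmission state:
     this is the entire "socket layer" the node needs. */
  simple_udp_register(&conn, UDP_PORT, NULL, UDP_PORT, rx_callback);

  etimer_set(&timer, CLOCK_SECOND * 10);
  while(1) {
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
    uip_create_linklocal_allnodes_mcast(&dest);   /* ff02::1 */
    simple_udp_sendto(&conn, reading, sizeof(reading), &dest);
    etimer_reset(&timer);
  }

  PROCESS_END();
}
```

Note what is absent: no connection setup, no sliding windows, no retransmission buffers. That is precisely why a UDP-only node fits comfortably in an 8051-class memory map while remaining a first-class IPv6/6LoWPAN citizen.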

 

A second argument against my hypothesis, which I hear when I bring it up in conversations with developers, is that the 8051 is an OLD architecture.

 

In its original form, it lacks instruction set and hardware features that more modern MCUs have: not only the 32bitters, but also more recent 8- and 16bit devices from Microchip, Texas Instruments, Freescale, ST Micro, and several Japanese vendors.

 

One thing that makes some embedded developers uncomfortable is that it uses the old-style complex instruction set computer (CISC) architecture, diametrically counter to the reduced instruction set computer (RISC) approach used on most recent MCUs.

 

Also, the 8051 was designed with a strict Harvard architecture, so it can only execute code fetched from program memory.

Nor does it have any instruction to write to program memory, so it is unable to download and directly execute new programs.

 

But in the Internet of Things, where security will be paramount, this restriction and the strict Harvard architecture have the advantage of making designs immune to most forms of malware, since downloaded data can never be executed as code. The short sketch below shows how this read-only property looks at the source level.
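As a small illustration, here is how code-space data looks in C for the 8051, using the free SDCC compiler's dialect (the __code qualifier and the <8051.h> header are SDCC-specific; with Keil C51 the keyword is spelled `code`):

```c
/* SDCC 8051 example: program memory is readable but not writable. */
#include <8051.h>

/* Lookup table placed in program (code) memory at build time. */
__code const unsigned char lut[4] = {0x10, 0x20, 0x40, 0x80};

unsigned char read_lut(unsigned char i)
{
    /* Compiles to a MOVC A,@A+DPTR fetch from program memory.
       The MCS-51 instruction set has no matching "write to code
       space" instruction, so firmware cannot patch itself at
       runtime and injected data can never become executable. */
    return lut[i & 3];
}
```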

 

But despite its "limitations," the 8051 is alive and well. Current vendors of MCS-51 compatible processors include more than 20 independent manufacturers.

 

Several derivatives integrate a digital signal processor (DSP), and several companies offer MCS-51 derivatives as IP cores for use in FPGA and ASIC designs.

Even if the 8051 has features that most embedded developers would disdain, that is not necessarily a barrier to its acceptance, if the recent history of Linux in embedded development can be taken as a guide.

 

When it was originally introduced, Linux was judged by most embedded developers as totally inappropriate for microcontroller applications that require a minimal memory footprint and real-time, deterministic interrupt response.

 

But as time went on, real-time enhancements were made and, even more important, as noted in "The work of Linus Torvalds," Linux was (and continues to be) adopted by universities and technical schools around the world as the platform they use to introduce their engineering and computer science students to the principles of operating system design and use.

 

The reason: as an open source OS without commercial licensing fees, it could be used at no cost in courses, as long as it was not used for commercial development.

 

For that reason, embedded developers who learned about operating systems on Linux try to make it fit into their designs before they consider a proprietary OS.

 

The same is true of the 8051, which in its original form carries none of the license fees associated with the ARM architecture, or with more than one of ARM's more recent 8- and 16bit MCU competitors.

 

Do an advanced Google search and you will find that, as with Linux, in courses on microcontroller basics for engineering students, the 8051 is the platform of choice by at least a 3-to-1 ratio (my own estimate) compared to other 8- and 16bit MCUs.

And when you compare its use as an MCU teaching platform in universities with that of the ARM 32bit MCUs, the ratio is even more lopsided.

 

Another argument is that there is no clear migration path from the 8051 to the newer ARM and MIPS 32bit architectures. Not so. Many of the 8051 suppliers are also 32bit ARM (or MIPS) MCU licensees, and in addition to migration paths from their own proprietary 8- and 16bit MCUs to the ARM, they have created paths for the 8051.

 

And don't forget the MCS-251 upgrade introduced by Intel before it abandoned the architecture, which is also supported and enhanced by a number of other chip vendors. It is a multipurpose 8/16/32bit MCU architecture with a 16MB (24bit) address space, and in its original form it had a six times faster instruction cycle. It can perform as an 8bit 8051 or as a 32bit ALU with 8/16/32bit-wide data instructions.

 

As to tools, there again the 8051, both in its original form and in its more recent enhanced forms, has a wealth of choices available from the chip vendors who continue to support it. There are several compilers and other tools available from major software tool suppliers such as IAR Systems, Keil, and Altium Tasking, who continuously release updates.
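There are free options as well. As a small illustration, the classic first 8051 program below builds with the open source SDCC compiler (`sdcc -mmcs51 blink.c`); the P1_0 register bit comes from SDCC's <8051.h> header, and the crude busy-wait delay is purely illustrative:

```c
/* blink.c: toggle an LED on port 1, pin 0 of a generic 8051. */
#include <8051.h>

static void delay(void)
{
    /* Crude busy-wait; a real design would use a hardware timer. */
    volatile unsigned int i;
    for (i = 0; i < 10000; i++)
        ;
}

void main(void)
{
    while (1) {
        P1_0 = !P1_0;   /* invert the pin driving the LED */
        delay();
    }
}
```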

 

Finally, what gives me the feeling that the requirements of the various M2M and wireless applications could be the place where the 8051 re-emerges as a major player, and a candidate for Ganssle's ARM nemesis, is the increasing number of articles on these topics presented at professional technical conferences and published in various journals in which the 8051 is the basic building block.

 

In various web surveys I have done of wireless sensor, machine-to-machine, and IoT design activity, the 8051 ranks right up there with designs based on an ARM CPU or one of the several other 8-/16bit MCUs.

 

Some of these articles and papers come from the same universities and technical institutes at which the 8051 is used in introductory engineering classes on microcontroller architectures.

 

Despite my arguments so far, I have to admit that the 8051, even in its MCS-251 enhanced configuration, is a long shot to be competitive with either the ARM 32bit Cortex-M0 or any of the various more recent 8- and 16bit MCUs in the IoT space.

 

But given the embedded industry's experience with Linux, it is not something that should be ruled out.

 

About the only scenario that is more of a long shot is ARM Ltd. developing a 16bit version of its Cortex-M0 MCU family based on the very CISCy Thumb-1 or Thumb-2 instruction set subsets it introduced to make its 32bitters more attractive to 16bit MCU developers.

 

On the face of it, this would be an elegantly simple solution. But who would be interested in licensing it from ARM? Certainly not any of the many ARM licensees who also have their own 8- and 16bit MCU families, the 8051, or both.
