Re: Java iButton from Dallas Semiconductor


Ryan Lackey (rdl@MIT.EDU)
Tue, 31 Mar 1998 00:33:44 EST


> I got one of the ibuttons too. Just from the demos, I have some serious
> reservations about it.

I was somewhat privy to information about the demos before they happened.
I was quite unimpressed by the demo concepts. I don't hold this against
the technology, though.
>
> First of all, the first thing you do in the demos at JavaOne was enter
> your real name and phone number. While that might not be inherent to the
> way the iButton works, it shows a serious lack of understanding of the
> meaning of identity on the part of people making the demos.

This is distinctly non-inherent to the protocol. It's just like a public
key certificate -- lots of places want real names and phone numbers. The
only thing that is "real" is that an ibutton has a unique serial number
(mostly useless for truly secure systems, since Eve can read it and replay
it) and the capacity to generate and store secret keys inside its security
perimeter.
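
To make that concrete: here's a rough sketch, in Java, of what the trust
boundary looks like. The SecureToken interface is mine, made up for
illustration -- it is not Dallas's actual API.

    import java.security.PublicKey;

    // Hypothetical interface, not the real iButton API.
    interface SecureToken {
        long getSerialNumber();      // unique, but Eve can read and spoof it
        PublicKey generateKeyPair(); // private half created and kept inside
        byte[] sign(byte[] data);    // signatures computed inside the perimeter
    }

The serial number buys you nothing against an active attacker; the keys
that never leave the perimeter are what matter.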

Don't write off the hardware just because some people who aren't cypherpunks
wrote some lame and last-minute demos.
>
> Then there's the matter of the ring not knowing what it's plugged into -
> at some point it has to download and run software, how does it know when
> the machine it's plugged into has that authorization? DOS attacks on rings
> are funny now, but wouldn't be if they were ever put into serious
> production. Downloading software once and never again would work, but even
> then the ring has no way of knowing if the device it's plugged into has
> authorization to send it messages to encrypt. Further, it doesn't seem to
> keep a permanent record of all things that it's encrypted.

It's a very well-studied problem -- read the IBM et al. papers on secure
hardware.

My solution: hardware is shipped under armed guard or tamper-evident seal from
the end of the production line to the secure facility. At the secure
facility, it is loaded with firmware into irrevocable memory. Optionally,
if the system has a well-defined full-reset-to-factory-state, you can
handle this by shipping the tamper-evident modules to end users, relying
on their trusted hardware to download PGP-signed software from you and
load it into their modules after a full reset. In the case of the
i-button, I believe the best solution is to have the buttons shipped to
a secure loading/staging facility and kept under seal until they're
actually loaded with the appropriate software.
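
Roughly, the reset-then-reload path looks like this. Token and its methods
are placeholders, and I'm using the stock java.security signature classes
to stand in for PGP signature checking:

    import java.security.PublicKey;
    import java.security.Signature;

    // Placeholder for the module being reloaded -- not a real API.
    interface Token { void fullReset(); void load(byte[] firmware); }

    class FirmwareLoader {
        // Runs on the user's trusted hardware: verify the vendor's
        // signature over the image, then reset the module and load it.
        static void loadIfGenuine(byte[] image, byte[] sig,
                                  PublicKey vendorKey, Token t)
                throws Exception {
            Signature v = Signature.getInstance("SHA1withDSA");
            v.initVerify(vendorKey);
            v.update(image);
            if (!v.verify(sig))
                throw new SecurityException("bad firmware signature");
            t.fullReset();   // back to the well-defined factory state
            t.load(image);   // then load the verified image
        }
    }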
>
> More fundamentally, I question the judgement of trying to use a product
> with a widely-distributed development kit for crypto purposes. It would be
> far too easy to take a ring someone else had developed and modify it
> slightly to have a back door, then swap it for someone's ring before they
> even opened it. Security is only as strong as its weakest link, and
> there's no point in bothering with serious crypto if there's a hole that
> big.

Um, this is the security-through-obscurity argument, and it fails.

What about using a general-purpose computer for crypto purposes? I don't
know how many *millions* of people have access to a C compiler these days.
Any one of them could write a trojan-horse version of PGP, or of gcc, or
whatever.

I mostly don't believe in non-armed-guard, non-explosive tamper-resistant
physical seals. However, if I develop an application to ship 1,000,000
i-buttons, I can get Guido, Tim May :) and myself to drive down to their
plant, load them into a truck, and shoot anyone who gets anywhere near. Then,
I load the rings in such a way that the initialization routine requires
a challenge-response interaction with the user, with the ring presenting
to the user a secret previously exchanged between the user and the ring
provider (perhaps just the serial number of the ring). The problem then
reduces to keeping a $50 package from being stolen (otherwise you need to
resend it, and you're out $50) and to securely communicating an
authentication code from the ring provider to the end user. These are
solved problems (FedEx; PGP or simple DH).
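
In code, the initialization check on the user's machine comes down to
something like this. The names are made up, and a real deployment would
do a proper challenge-response rather than a bare comparison:

    import java.util.Arrays;

    // Hypothetical: reveals the init secret once, then wipes it.
    interface Ring { byte[] revealInitSecret(); }

    class InitCheck {
        // secretFromProvider arrived out of band, e.g. in PGP-signed
        // mail from the ring provider, or via a DH exchange.
        static boolean ringIsGenuine(Ring ring, byte[] secretFromProvider) {
            return Arrays.equals(ring.revealInitSecret(), secretFromProvider);
        }
    }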
>
> Many of the above problems are reasonably fixable, but I didn't see any
> evidence that the people doing work on the iButton are even aware of them.

Um, I am doing work on the i-button. I am aware of them. I've actually
worked up a full threat model for i-button systems, and have been doing
development with that model in mind.
>
> I'm actually quite excited about small specialized cryptographic devices,
> and view them as being essential for widespread use of strong cryptography
> in many applications, but I think these devices should be regarded as
> specialized dongles with specific well-defined interfaces, not
> general-purpose computers in a fancy little box.

The i-button loses only in that it doesn't have direct user i/o. As
a consequence, PINs entered into it, plaintext displayed to the user, etc.
all pass through the host and are vulnerable there. However, in a lot of
systems, keeping long-term secrets secret is a tremendous advantage over
the current system.

This is also a tractable problem.
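
To spell out that boundary (hypothetical names again): the PIN is exposed
on the host, but the long-term key never is.

    // Hypothetical interface, not the real iButton API.
    interface Button { void unlock(char[] pin); byte[] sign(byte[] doc); }

    class HostSide {
        static byte[] signDocument(Button b, byte[] doc, char[] pin) {
            b.unlock(pin);       // PIN crosses the untrusted host: the weakness
            return b.sign(doc);  // key stays inside the button: the win
        }
    }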
>
> -Bram
>
Another security problem is, of course, whether you trust Dallas Semiconductor.
I mostly trust them, but before I commit a huge amount of money to an
i-button, I want to buy 500 of them from random lots and disassemble them
with an electron microscope and nitric acid :)

(I trust their hardware, not really their software. I also don't necessarily
trust their design decisions. I just trust them not to be keeping secret keys
for internal use, and mostly trust them not to have sold out to the NSA.)

-- 
Ryan Lackey
rdl@mit.edu
http://mit.edu/rdl/		

