Showing posts with label techie geekdom. Show all posts

Monday, June 24, 2024

Adobe updates License terms to be less douchy

The key word here is "less":

Adobe has promised to update its terms of service to make it "abundantly clear" that the company will "never" train generative AI on creators' content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to these terms to access Adobe apps, disrupting access to creatives' projects unless they immediately accepted them.

...

On X (formerly Twitter), YouTuber Sasha Yanshin wrote that he canceled his Adobe license "after many years as a customer," arguing that "no creator in their right mind can accept" Adobe's terms that seemed to seize a "worldwide royalty-free license to reproduce, display, distribute" or "do whatever they want with any content" produced using their software.

...

Adobe's design leader Scott Belsky replied, telling Yanshin that Adobe had clarified the update in a blog post and noting that Adobe's terms for licensing content are typical for every cloud content company. But he acknowledged that those terms were written about 11 years ago and that the language could be plainer, writing that "modern terms of service in the current climate of customer concerns should evolve to address modern day concerns directly."

...

"You forced people to sign new Terms," Yanshin told Belsky on X. "Legally, they are the only thing that matters."

The original story is here.

I'm not sure this brouhaha is over.

Monday, June 17, 2024

55-year-old bug fixed

This may be the oldest bug fix in history, in the 1969 "Lunar Lander" text-based computer game.  I really enjoyed that, back in the 1970s.


And yes, it printed out on paper.  The story is very cool:

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.

...

Fast forward to 2024, when Martin—an AI expert, game developer, and former postdoctoral associate at MIT—stumbled upon a bug in Storer's high school code while exploring what he believed was the optimal strategy for landing the module with maximum fuel efficiency—a technique known among Kerbal Space Program enthusiasts as the "suicide burn." This method involves falling freely to build up speed and then igniting the engines at the last possible moment to slow down just enough to touch down safely. He also tried another approach—a more gentle landing.

"I recently explored the optimal fuel burn schedule to land as gently as possible and with maximum remaining fuel," Martin wrote on his blog. "Surprisingly, the theoretical best strategy didn’t work. The game falsely thinks the lander doesn’t touch down on the surface when in fact it does. Digging in, I was amazed by the sophisticated physics and numerical computing in the game. Eventually I found a bug: a missing 'divide by two' that had seemingly gone unnoticed for nearly 55 years."
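A missing "divide by two" is the classic constant-acceleration kinematics slip: position should advance by v·dt + (a/2)·dt² per time step, and dropping the half doubles the acceleration term.  A minimal sketch (my own illustration in Python, not Storer's actual FOCAL code):

```python
def step_buggy(x, v, a, dt):
    # Missing the divide-by-two in the position update:
    # x' = x + v*dt + a*dt**2   (wrong)
    return x + v * dt + a * dt**2, v + a * dt

def step_correct(x, v, a, dt):
    # Constant-acceleration kinematics: x' = x + v*dt + (a/2)*dt**2
    return x + v * dt + 0.5 * a * dt**2, v + a * dt

# Free fall from 100 m under lunar gravity (~1.62 m/s^2), one 10 s step
g = -1.62
x_bad, _ = step_buggy(100.0, 0.0, g, 10.0)
x_good, _ = step_correct(100.0, 0.0, g, 10.0)
print(x_bad, x_good)  # buggy vs. correct altitude after one step
```

Over a single 10-second step the two updates disagree by 81 meters of altitude - plenty to fool a touchdown check one way or the other.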

Very cool story.

Wednesday, June 12, 2024

Alternative to Adobe Photoshop

OldNFO points out a post at Lawrence's place about how Adobe has changed their terms of service.  Basically, you have to agree that they own all the work you create with their software in order to keep access to that work.

Sweet. 

Now IANAL, and so don't know how the (inevitable) Class Action lawsuit(s) will play out.  However, I am an enthusiastic user of The GIMP, a free (as in speech) Open Source Photoshop-alike application.

Yes, it has a Photoshop-worthy learning curve, but it is full featured and powerful, cross platform, and free.  No weird terms of service getting changed at midnight.

If you're looking for an alternative to Photoshop, I highly recommend this.

Friday, April 1, 2022

April Fools tech humor

Back in the early 90s I was a nerd [pauses to let shocked gasps die down].  There was a couple-year period where I read every single one of the Internet specs as they were released.  These documents are rather strangely named "Request For Comments", or RFCs.  Since it was my job to know nerdy Internet stuff, I read 'em all, probably a couple a week back then.

Well, every April Fools' Day there would be a joke RFC.  There's a pretty good Wikipedia page that lists them.  Here's a recent example: RFC 8565, Hypertext Jeopardy Protocol.  The Abstract reads:

The Hypertext Jeopardy Protocol (HTJP) inverts the request/response semantics of the Hypertext Transfer Protocol (HTTP). Using conventional HTTP, one connects to a server, asks a question, and expects a correct answer. Using HTJP, one connects to a server, sends an answer, and expects a correct question. This document specifies the semantics of HTJP.

Pretty funny right there, in a very nerdy way.  But one that I remember from way back in the day was RFC 1149, Standard for the transmission of IP datagrams on Avian Carriers.  Basically it was sending Internet messages by carrier pigeon.  We yukked this up around the coffee mess.

Well, it turns out some nerds actually implemented this - they built a working system that used pigeons:

Finally, rfc 1149 is implemented! On saturday 28th of april 2001, the worlds very first rfc 1149 network was tested. The weather was quite nice, despite being in one of the most rainy places in Norway.

The ping was started approximately at 12:15. We decided to do a 7 1/2 minute interval between the ping packets, that would leave a couple of packets unanswered, given ideal situations. Things didn't happen quite that way, though. It happened that the neighbour had a flock of pigeons flying. Our pigeons didn't want to go home at once, they wanted to fly with the other pigeons instead. And who can blame them, when the sun was finally shining after a couple of days?

But the instincts won at last, and after about an hour of fun, we could see a couple of pigeons breaking out of the flock and heading in the right direction. There was much cheering. Apparantly, it WAS our pigeons, because not long after, we got a report from the other site that the first pigeon was sitting on the roof.

Read the whole glorious thing here.  Linux nerds FTW!

Thursday, April 22, 2021

Microsoft and Linux, sitting in a tree ...

K-I-S-S-I-N-G:

Microsoft this week released a preview version of Windows Subsystem for Linux GUI, or WSLg, which provides a way to run Linux applications with graphic interfaces on Windows devices.

...

"You can use this feature to run any GUI application that might only exist in Linux, or to run your own applications or testing in a Linux environment," explained Craig Loewen, program manager for the Windows Developer Platform at Microsoft, in a blog post.

Man, the tech world is getting weird.  Eric Raymond had a pretty interesting take on this last year:

Azure makes Microsoft most of its money. The Windows monopoly has become a sideshow, with sales of conventional desktop PCs (the only market it dominates) declining. Accordingly, the return on investment of spending on Windows development is falling. As PC volume sales continue to fall off, it’s inevitably going to stop being a profit center and turn into a drag on the business.


Looked at from the point of view of cold-blooded profit maximization, this means continuing Windows development is a thing Microsoft would prefer not to be doing. Instead, they’d do better putting more capital investment into Azure – which is widely rumored to be running more Linux instances than Windows these days.

Interesting world we live in.

Tuesday, November 26, 2019

So we're finally out of Internet Addresses

Well, IPv4 addresses.  I think that this time it's for sure.  Maybe.

It doesn't look like it's making much - or any - difference.  For sure, nobody wants to run IPv6 - no doubt most vendors "support" IPv6 only if you also run IPv4.  And at no time do their fingers leave their hands ...

The good news is that if everyone had to switch to IPv6, it would probably go decently well.  Sure, a few things would break, but they'd get fixed tout de suite.  I mean, it's the Internet.  You bet they'd fix it.

The whole thing has been billed as an Earth Shattering Kaboom but has turned out to be a damp firecracker.

Sunday, June 30, 2019

George Antheil - Ballet Mécanique

The early years of the 20th century were a very strange time in the art world.  Strangest of them all were the dadaists - surrealists and absurdists of an almost Monty Pythonesque stature.   Ballet Mécanique was a film and a musical score from 1923 that was perhaps the height of the Dada movement.



The name comes from the replacement of human dancers with industrial machines and propellers, and replacement of the orchestra with a brigade of player pianos.  Like I said, surreal and absurd.

But what is stranger is the unlikely friendship that the composer struck up with an actress.   George Antheil ended up collaborating with Hedy Lamarr on an idea that would receive a US Patent.  I wrote about this story on this day ten years ago.

U.S. Patent #2,292,387

At State U, I studied Electrical Engineering (among other things). One thing that we studied was frequency hopping radio, which is a cool way to make your conversation hard for someone to eavesdrop on. It works kind of like this:

You and I both tune our radios to a particular frequency, say WRDK-Redneck FM. I speak the first word of the sentence, "Lever".

You and I both tune our radios to a different frequency, say WLTE-Lite FM. I say the next word, "guns".

We tune to a third frequency, for a third word "are", then a fourth "sweet."

If the adversary doesn't know the order of frequencies and the timing of the changes, he'll never know our secret: Lever guns are sweet (unless he reads this blog, of course).
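The trick above can be sketched in a few lines of Python: both ends seed the same pseudo-random number generator with a shared secret, so they derive the same hop schedule, while an eavesdropper without the seed hears only fragments.  (A toy illustration - real spread spectrum systems hop many times per second, and the seed value and station names here are just made up for the example.)

```python
import random

CHANNELS = ["WRDK-Redneck FM", "WLTE-Lite FM", "88.1", "94.7", "101.3"]

def hop_sequence(seed, n):
    # Both parties derive the same pseudo-random channel order
    # from a shared secret seed.
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(n)]

message = ["Lever", "guns", "are", "sweet"]
tx_hops = hop_sequence(seed=0xC0FFEE, n=len(message))
rx_hops = hop_sequence(seed=0xC0FFEE, n=len(message))

# One word per hop; a receiver tuned to the same sequence hears all of it.
received = [word for word, tx, rx in zip(message, tx_hops, rx_hops) if tx == rx]
print(" ".join(received))  # -> Lever guns are sweet
```

Without the seed, the adversary doesn't know which channel carries which word, which is the whole point.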

What I didn't learn at State U was who invented this technique, and got a patent from the US Patent and Trademark Office: Hedy Lamarr:
The idea was ahead of its time, and not feasible owing to the state of mechanical technology in 1942. It was not implemented in the USA until 1962, when it was used by U.S. military ships during a blockade of Cuba after the patent had expired. Neither Lamarr nor Antheil, who died in 1959, made any money from the patent. Perhaps owing to this lag in development, the patent was little-known until 1997, when the Electronic Frontier Foundation gave Lamarr an award for this contribution.
From the EFF's award:
Actress Hedy Lamarr and composer George Antheil are being honored by the EFF this year with a special award for their trail-blazing development of a technology that has become a key component of wireless data systems. In 1942 Lamarr, once named the "most beautiful woman in the world" and Antheil, dubbed "the bad boy of music" patented the concept of "frequency-hopping" that is now the basis for the spread spectrum radio systems used in the products of over 40 companies manufacturing items ranging from cell phones to wireless networking systems.



Pretty darn impressive, especially given how open the 1940s scientific community was to contributions from women.

And so, I hereby pledge my allegiance to Hedy Lamarr:

Sorry, I meant "Hedley" ...

Thursday, June 6, 2019

Interesting - telecom fraud is down

Telecom fraud is AFAIK the oldest form of computer fraud.  I hadn't been blogging even a month when I posted about a fellow who called himself Cap'n Crunch because he used the whistle in the cereal box to steal free long distance service.  It seems that the whistle put out a tone at exactly 2600 Hz, which happened to be the tone that the phone switches used for signaling commands.
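Detecting a single in-band tone like that 2600 Hz is a classic job for the Goertzel algorithm.  A sketch (illustrative Python, obviously not the electromechanical filters the Bell System actually used):

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    # Goertzel algorithm: measures the signal power at one frequency,
    # the cheap way to watch a line for a single signaling tone.
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

rate = 8000  # telephone-grade sample rate, Hz
whistle = [math.sin(2 * math.pi * 2600 * t / rate) for t in range(400)]
quiet_line = [0.0] * 400

print(goertzel_power(whistle, rate, 2600))     # large: tone present
print(goertzel_power(quiet_line, rate, 2600))  # ~0: no tone
```

A switch watching the line with something like this hears Cap'n Crunch's whistle as a legitimate supervisory signal - the fundamental problem with in-band signaling.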

Well, it turns out that telecom fraud is down, because the people who would steal long distance time now use Skype.  Of course, phone spam is way up because it's easy for the Bad Guys to fake their caller ID.  The industry is working on fixing this but it will take years before it is deployed and nobody is entirely sure how effective it will be.

Tuesday, April 23, 2019

I don't think that I want to fly on a Boeing 737 Max

There is a great analysis of the 737 Max failures at IEEE:
The engines on the original 737 had a fan diameter (that of the intake blades on the engine) of just 100 centimeters (40 inches); those planned for the 737 Max have 176 cm. That’s a centerline difference of well over 30 cm (a foot), and you couldn’t “ovalize” the intake enough to hang the new engines beneath the wing without scraping the ground.
The solution was to extend the engine up and well in front of the wing. However, doing so also meant that the centerline of the engine’s thrust changed. Now, when the pilots applied power to the engine, the aircraft would have a significant propensity to “pitch up,” or raise its nose.
Larger engines were critical to the design, because that's how you get efficiency (read: lowest fuel cost).  The old airframe (fuselage and wings) was critical to the design because if you make a major change to the plane, the FAA certification is no longer valid and you need to (very expensively) re-certify the plane.
In the 737 Max, the engine nacelles themselves can, at high angles of attack, work as a wing and produce lift. And the lift they produce is well ahead of the wing’s center of lift, meaning the nacelles will cause the 737 Max at a high angle of attack to go to a higher angle of attack. This is aerodynamic malpractice of the worst kind.
This is really, really bad.  Consider a plane that is about to stall.  One approach (especially with large, powerful engines) is to apply power to increase air speed.  On the 737 Max, this will cause the nose to pitch up and bring on the stall.  The design is inherently unstable in this situation.
Let’s review what the MCAS does: It pushes the nose of the plane down when the system thinks the plane might exceed its angle-of-attack limits; it does so to avoid an aerodynamic stall. Boeing put MCAS into the 737 Max because the larger engines and their placement make a stall more likely in a 737 Max than in previous 737 models.
When MCAS senses that the angle of attack is too high, it commands the aircraft’s trim system (the system that makes the plane go up or down) to lower the nose. It also does something else: Indirectly, via something Boeing calls the “Elevator Feel Computer,” it pushes the pilot’s control columns (the things the pilots pull or push on to raise or lower the aircraft’s nose) downward.
This sounds sensible, although kludgy.  The problem is that the Elevator Feel Computer has a really powerful actuator; pilots will struggle to overcome it and push the nose down.  It seems that this wasn't a bug, but a feature of the design.  But here's the crux of the problem:
In the 737 Max, only one of the flight management computers is active at a time—either the pilot’s computer or the copilot’s computer. And the active computer takes inputs only from the sensors on its own side of the aircraft.
When the two computers disagree, the solution for the humans in the cockpit is to look across the control panel to see what the other instruments are saying and then sort it out. In the Boeing system, the flight management computer does not “look across” at the other instruments. It believes only the instruments on its side. It doesn’t go old-school. It’s modern. It’s software.
This means that if a particular angle-of-attack sensor goes haywire—which happens all the time in a machine that alternates from one extreme environment to another, vibrating and shaking all the way—the flight management computer just believes it.
There's no redundancy.  Let me elaborate on that:

There's no redundancy.
There's no redundancy.
There's no redundancy.
There's no redundancy.


Holy cow, this is the dumbest design I've ever heard of, and I'm not even an aeronautical engineer.  This smells of "we found this out late in testing and had outsourced software developers write us some code in a hurry to fix it".  I don't know if that's how things happened but I've seen this more than once or twice in my career.
It gets even worse. There are several other instruments that can be used to determine things like angle of attack, either directly or indirectly, such as the pitot tubes, the artificial horizons, etc. All of these things would be cross-checked by a human pilot to quickly diagnose a faulty angle-of-attack sensor.
In a pinch, a human pilot could just look out the windshield to confirm visually and directly that, no, the aircraft is not pitched up dangerously. That’s the ultimate check and should go directly to the pilot’s ultimate sovereignty. Unfortunately, the current implementation of MCAS denies that sovereignty. It denies the pilots the ability to respond to what’s before their own eyes.
Like someone with narcissistic personality disorder, MCAS gaslights the pilots. And it turns out badly for everyone. “Raise the nose, HAL.” “I’m sorry, Dave, I’m afraid I can’t do that.”
There's no redundancy.
There's no redundancy.
There's no redundancy.
There's no redundancy.
So Boeing produced a dynamically unstable airframe, the 737 Max. That is big strike No. 1. Boeing then tried to mask the 737’s dynamic instability with a software system. Big strike No. 2. Finally, the software relied on systems known for their propensity to fail (angle-of-attack indicators) and did not appear to include even rudimentary provisions to cross-check the outputs of the angle-of-attack sensor against other sensors, or even the other angle-of-attack sensor. Big strike No. 3.
None of the above should have passed muster. None of the above should have passed the “OK” pencil of the most junior engineering staff, much less a DER.
That’s not a big strike. That’s a political, social, economic, and technical sin.
This is a long and detailed article and I've only excerpted key bits.  You should really read the whole thing because the situation is simply horrifying.  Boeing has destroyed their reputation.
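The cross-checking the article describes is bog-standard redundancy engineering.  A median-of-three voter - a hypothetical sketch, not Boeing's code, and note the real 737 Max only had two AoA vanes, so the third "standby" input here is my own addition - looks like this:

```python
def vote_angle_of_attack(left_aoa, right_aoa, standby_aoa, tolerance=5.0):
    """Median-of-three voting: a single haywire sensor gets outvoted.

    Hypothetical illustration of sensor cross-checking; real avionics
    voting logic is far more involved.
    """
    median = sorted([left_aoa, right_aoa, standby_aoa])[1]
    # Flag any sensor that disagrees with the median by more than tolerance.
    faulty = [r for r in (left_aoa, right_aoa, standby_aoa)
              if abs(r - median) > tolerance]
    return median, faulty

# Left vane fails high (the reported failure mode): it is outvoted and flagged.
aoa, faulty = vote_angle_of_attack(74.5, 5.2, 5.0)
print(aoa, faulty)  # -> 5.2 [74.5]
```

One bad sensor gets outvoted and flagged for the crew, which is exactly what didn't happen.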

I've written many, many, many times about design issues in Airbus' flight control software, where the pilots become confused or the software freaks out and people die.  I always liked flying Boeing because their reputation that "the pilot is always in charge" matched my strong preference - my whole career has been dealing with software failure, and my imagination is too active to ever be comfortable on an Airbus plane.

Well that has all changed after 737 Max.  It's not just that the pilot can't fly the plane now, it's this:
That’s because the major selling point of the 737 Max is that it is just a 737, and any pilot who has flown other 737s can fly a 737 Max without expensive training, without recertification, without another type of rating. Airlines—Southwest is a prominent example—tend to go for one “standard” airplane. They want to have one airplane that all their pilots can fly because that makes both pilots and airplanes fungible, maximizing flexibility and minimizing costs.
It all comes down to money, and in this case, MCAS was the way for both Boeing and its customers to keep the money flowing in the right direction. The necessity to insist that the 737 Max was no different in flying characteristics, no different in systems, from any other 737 was the key to the 737 Max’s fleet fungibility. That’s probably also the reason why the documentation about the MCAS system was kept on the down-low.
And so the pilots on the fatal flights couldn't figure out how to get out of the situation because Boeing intentionally did not tell them.  Allegedly.  This one will have to go through the courts but this very well may end up being the most expensive design mistake in history.

Wednesday, March 27, 2019

That was fun

As you'd expect, a meeting with readers of this blog led to a discussion of things techie: security and fault analysis, self-driving car failure modes, microbiology and genetics, software and coding, and, well, a lively discussion of cabbages and kings.  It was a blast.

I'd feel bad about all the geekery except that long time readers know what to expect.  If you're new here then that's one thing, but if you've been reading for years then that's on you.  ;-)

Friday, February 15, 2019

Self-Driving cars: unsafe at any speed

A Tesla on autopilot drove itself into a wreck.  The failure mode is interesting:
Yet another example came to light on Monday when a driver in North Brunswick, New Jersey wrecked his Tesla on a highway while the vehicle was in Autopilot mode. According to a report published by News 12 New Jersey, the driver said that the vehicle "got confused due to the lane markings" at a point where the driver could have stayed on the highway or taken an exit. The driver claims that Autopilot split the difference and went down "the middle", between the exit and staying on the highway.

The car then drove off the road and collided with several objects before coming to a stop. The driver claims that he tried to regain control of the vehicle but that "it would not let him".
Insty is skeptical, but I'm not.  This is exactly the kind of situation that you should suspect the software could handle badly: confusing input from signs or lane markers leading to a failure to navigate the car on a safe route.  It's not a software bug, it's a gap in the algorithm used to control the car.

I'm not sure that this is solvable, either.  The way software developers handle these "edge cases" is (a) ignore them if possible (I can't see Tesla being able to do that), or (b) write a special case condition that covers the situation.  The problem with the latter option is that there can be hundreds of these special cases that need to be coded in.  That makes the software a huge bloated mass, and nobody can really predict how it will behave.  Validation becomes really hard and QA testing becomes essentially impossible.

And this is without postulating software bugs - this is all trying to make the algorithm suck less.  Of course, the more code you have to write, the more bugs you will have - remember that validation becomes really hard and testing well nigh impossible?  You'll have an unknown number of potentially fatal bugs that you probably won't know about.

At least, not until we have a different type of computer (probably one not based on the von Neumann architecture).  If you want to get really computer geeky (and I know that some of you do), automotive autopilot problems are almost certainly NP-Hard.  For non computer geeks, that means if you want to code one of them, good luck - you're going to need it.

The bottom line: I have absolutely no intention to ever trust my life to software grinding away at an NP-Hard problem.  I know (and admire) many software developers, but this is flying too close to the sun.  Someone's wings will melt.

Tuesday, October 2, 2018

Maybe we are living in the future after all

It's the 21st century, and we have space probes going to asteroids:


And flying cars go on sale next month:



Hat tip: A Large Regular.



Sometimes the future is stupid, but sometimes it can be very futurish indeed.

Monday, April 30, 2018

Proposed "Universal Secure Backdoor" for iPhones isn't secure

Ray Ozzie is one of the technical giants of the computer era.  He was one of the team that created VisiCalc back around 1980.  He followed this by creating Lotus Notes (IIRC, this is still being sold by IBM two decades later, which has to be some kind of record).  He was one of Microsoft's CTOs and took over as Chief Software Architect from Bill Gates himself.  He is responsible for Microsoft's Azure cloud initiative - if you haven't heard of this, it's a bet-the-company gamble on cloud computing, including making Microsoft Office work as a cloud service.  It looks like it's going to save Microsoft's bacon.

That's quite a resume.  Ozzie has a new proposal out for a secure universal backdoor to allow Law Enforcement to unlock iPhones.  It's an excellent example of the way that great software engineers are spectacular failures as security engineers.

The Geek With Guns has background on how it's supposed to work:
Dubbed “Clear,” Ozzie’s idea was first detailed Wednesday in an article published in Wired and described in general terms last month.
[…]
  1. Apple and other manufacturers would generate a cryptographic keypair and would install the public key on every device and keep the private key in the same type of ultra-secure storage vault it uses to safeguard code-signing keys.
  2. The public key on the phone would be used to encrypt the PIN users set to unlock their devices. This encrypted PIN would then be stored on the device.
  3. In cases where “exceptional access” is justified, law enforcement officials would first obtain a search warrant that would allow them to place a device they have physical access over into some sort of recovery mode. This mode would (a) display the encrypted PIN and (b) effectively brick the phone in a way that would permanently prevent it from being used further or from data on it being erased.
  4. Law enforcement officials would send the encrypted PIN to the manufacturer. Once the manufacturer is certain the warrant is valid, it would use the private key stored in its secure vault to decrypt the PIN and provide it to the law enforcement officials.
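Steps 1 through 4 are ordinary public-key escrow, which is the point: the math is the easy part.  A deliberately toy sketch in Python (textbook RSA on tiny primes - nothing like production crypto, and the PIN value and function names are my own invention):

```python
# Manufacturer's keypair (step 1).  Toy textbook-RSA numbers;
# the private exponent d lives only in the "vault".
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent, baked into every phone
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept in the vault

def device_enrolls(pin):
    # Step 2: the phone encrypts the user's PIN with the public key
    # and stores only this ciphertext.
    return pow(pin, e, n)

def vault_decrypts(encrypted_pin):
    # Step 4: after the warrant checks out, the vault recovers the PIN.
    return pow(encrypted_pin, d, n)

blob = device_enrolls(1234)        # PIN must be < n for this toy modulus
print(vault_decrypts(blob))        # -> 1234
```

The scheme "works" on paper; the problem is everything around the vault, not this arithmetic.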
Well, so what's the problem?  After all, the keys used to sign software are incredibly sensitive - if someone gains access to your keys they could sign all sorts of malware which people's computers would recognize as coming from you.  So keeping these keys on the same secure system solves the problem, right?  Well, no:
Yes, Apple has a vault where they've successfully protected important keys. No, it doesn't mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

Ozzie makes an assumption that makes sense only if you don't understand operational procedure.  Yes, we've secured the keys under Scenario A.  Ozzie spends absolutely no time at all showing how his Scenario B is similar to Scenario A.  Quite frankly, it's not even in the same ballpark.  A procedure that is performed a few dozen times a year can be handled quite well by a small group of highly security-savvy people; the same procedure performed thousands of times a year simply cannot be.  I myself have been part of these teams; there is a very high level of awareness of the implications of any screwup, and team members need joint simultaneous access to the system to make it work.  Think of it as sort of like ICBMs, where there are two launch keys that have to be turned simultaneously, with the locks on opposite sides of the room.  The Launch Operators have to work together to make it happen.

That works for a few launches, but couldn't possibly work for a dozen a day or more, which is what Law Enforcement would want.  The system will collapse - or be intentionally subverted - a thousand different ways:
We have a mathematically pure encryption algorithm called the "One Time Pad". It can't ever be broken, provably so with mathematics.

It's also perfectly useless, as it's not something humans can use. That's why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pad's on my grandfather's knee -- he was a WW II codebreaker who broke German messages trying to futz with One Time Pads).

The same is true with Ozzie's scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don't know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the "trusted Apple employee" can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren't.
Nope.  This isn't about hardware or software, it's about wetware (sometimes called "peopleware").  People, as we all know, are prone to make mistakes or to be corruptible.  And this is where the Geek With Guns makes his point:
What’s noteworthy in regards to this post is the fact that nowhere does the Fourth Amendment state that measures have to be taken to make information easily accessible to the government once a warrant is issued. This omission is noteworthy because a lot of the political debates revolving around computer security are argued as if the Fourth Amendment contains or implies such language
We are often asked "Why do you need an AR-15?"  The question implies that we need to justify ourselves to some government authority and get permission before we can do something.  You could easily rephrase the question as "Why do you need a phone that keeps your secrets from everyone?"*

The answer to both questions, of course, is identical.  Because fuck you is why.

* Assuming you can get one of these today.  Which you can't.  But while Law Enforcement could indeed collect all your data from the companies who collect it, it would be a royal pain in the behind for them to go to Apple, Google, Facebook, Twitter, and all the other companies with warrants.  Which is why they want backdoors.

Wednesday, April 18, 2018

Is there some hope for Internet Of Things security?

Maybe.  Microsoft just announced they are getting into the game:
Microsoft has designed a family of Arm-based system-on-chips for Internet-of-Things devices that runs its own flavor of Linux – and securely connects to an Azure-hosted backend. 
Dubbed Azure Sphere, the platform is Microsoft's foray into the trendy edge-computing space, while craftily locking gadget makers into cloud subscriptions.
I know what you are thinking: Microsoft is solving a security problem?  Well, maybe.  Microsoft got a bad security reputation 20 years ago, but has been doing a credible job for quite some time now.  Besides, they address what are probably the top IoT security issues:

1. The people who write the IoT apps don't know the first thing about security, and so make mistakes that everyone else has known how to prevent for 20 years: insecure default passwords, poor network security hygiene, bad coding that allows common attacks, etc.  Because Microsoft is providing a development environment for creating these apps, they can provide a sane set of default settings that will make these sorts of attacks a lot harder.  I'm not sure if they will do this, but they could.

2. The people who write the IoT apps mostly don't have an auto-update mechanism to roll out new security fixes.  Most of these will not be in the app itself, but will rather be in the underlying Operating System code.  Microsoft has an update mechanism built into the system, so this will be automagic.  The IoT app developer doesn't have to know anything about security to get this.

These two changes will potentially move the needle a lot to make the systems more secure.  We'll have to see how things play out, but this is a positive move.

Friday, March 9, 2018

President Trump: NASA rocket launches "40-50 times" the cost of Falcon Heavy

It seems like his numbers hold up: $90M per launch for Falcon Heavy vs. $3B for NASA's new Space Launch System.

We've known since the '70s that NASA would end up like Amtrak if it tried to be Star Trek.  40x more expensive puts Amtrak to shame.


I must give Obama credit where credit is due - he had his administration get pretty much completely out of the way of commercial space exploration.  We are seeing a dramatic reduction in price per pound to low Earth orbit, which Trump recognizes.  Maybe Moon colonies are back on the table, with commercial launch handling the well-understood portion of the problem and NASA focusing on what it did so magnificently in the 1960s - pushing the technology envelope.

But we've known for a long time that space exploration would have to be done by private commerce.

Wednesday, February 7, 2018

LOL



Wednesday, January 24, 2018

R.I.P. Moore's Law

This seems like a pretty big deal:
The death of Moore’s law is no surprise, because the semiconductor industry has told contradictory stories for years. While it created new process nodes like clockwork, the capital requirements to develop those new devices climbed nearly exponentially. 
The laws of physics were to blame: they created a money pit into which Intel and the other companies threw tens of billions of dollars, with little to show for it. 
Physics was a tough enough opponent, but now computer science itself has joined the fight thanks to the Meltdown and Spectre design flaws first revealed here in The Register
The two mistakes mean that branch prediction techniques, designed to further improve the performance of ever-cheaper silicon, have introduced two classes of security threats - one set (Meltdown) that can be remediated by imposing as much as a 30% performance penalty - and another set (Spectre) that at this point can’t really be remediated at all - except, possibly, by littering code with instructions that suck all the benefits out of branch prediction. 
The computer science behind microprocessor design has therefore found itself making a rapid U-turn as it learns that its optimisation techniques can be weaponised. The huge costs of Meltdown and Spectre - which no one can even guess at today - will make chip designers much more conservative in their performance innovations, as they pause to wonder if every one of those innovations could, at some future point, lead to the kind of chaos that has engulfed us all over the last weeks. 
One thing has already become clear: in the short term, performance will go backwards. The steady and reliable improvements every software engineer could rely on to make messy code performant can no longer be guaranteed. Now the opposite applies: it’s likely computers will be less performant a year from now.
Software has been increasingly bloated for about as long as I can remember - especially since Windows XP.  Linux has seen a noticeable slowdown over the past 10-15 years, and Linux has about as pure a performance-optimization, old-school software hacker ethos as anything these days.

Likely mobile phones will be hardest hit, as the iOS/Android bloat continues unabated and power draw prevents a brute force "turn up the clock speed" approach.  Slower CPUs combined with bloated, slower software will give a lot worse user experience.

We are seeing the passing of an age of innocence.

Tuesday, January 9, 2018

Linux: still runs on a 486 computer

While this seems like a bit of a goofy project, this demonstrates why Linux is so popular in the server world.  Modern Linux runs on ancient computers:
What is the oldest x86 processor that is still supported by a modern Linux kernel in present time?
I asked the above quiz question during the Geekcamp tech conference in Nov 2017 during my emcee role. The theoretical answer as you can glean from the title of this post is the 486 which was first released in 1989.
He got a modern operating system running on a nearly 30 year old (!) computer.  This is all good fun, of course, but there are some really important takeaways:

1. Linux has exceptional support for old hardware.  One of the reasons people have used it is that instead of throwing away old Windows computers, they can turn them into servers.

2. The operating system is very, very stable.  You can have some confidence turning your old hardware into servers because very little in the OS is hardware dependent and so it just keeps running.

All in all, this is a pretty interesting article (although it's very linux geeky).

Tuesday, November 7, 2017

Robot surgeons - outcomes are worse with them than without

It seems that some "Insanely Great" ideas aren't so hot when you measure actual outcomes:
Robot-assisted surgery costs more time and money than traditional methods, but isn't more effective, for certain types of operations. 
... 
The researchers, led by Stanford visiting scholar Gab Jeong, weighed outcomes for both robot-assisted and traditional laparoscopic kidney removal and rectal resection. With kidney surgery, they found that where surgeons used a robot, the procedure time dragged on more than four hours in 46.3 per cent of cases, compared to just 28.5 per cent of cases where the surgeon worked without a mechanical assistant.
There are likely many issues in play here: immature technology, a long and bureaucratic FDA approval process, and high levels of training needed for surgeons and nurses.  The technology at least will evolve, but this points out the difficulty in introducing "game changing" technologies in mission-critical fields.

More importantly, it shows the difficulty in targeting the types of problems where robots can improve outcomes.  This is probably a lot harder than it seems, and with the expense of the FDA approval a lot riskier, too.


Tuesday, October 24, 2017

So what is Bitcoin, anyway?

Glen Filthie asked if I could do a post about crypto currencies - what's the big deal, and why does anyone care?  I'm not an expert, but here's a quick overview.

Why would anyone want a Cryptocurrency?

Most financial transactions are controlled by central banks like the Federal Reserve or Bank of England, etc.  Electronic transactions are done bank-to-bank through networks like the SWIFT network.  This is globally scalable and very convenient, but it is explicitly not anonymous - the central bank (i.e. the Government) knows where your money is going.

If you want anonymous financial transactions, you really need to use cash.  Credit card (or ATM/debit card) transactions are all done through a centralized organization (your bank or MasterCard/VISA/AmEx/etc.), and so are, again, explicitly not anonymous.

The problem with cash is that you have to be physically present to buy something.  You can't just go online to order something from Joe's Pretty Good Cake Shoppe.  You need to get in the car and schlep on down to Joe's.  That's inconvenient if you are in Oklahoma City and Joe is in London.

This is where Cryptocurrency in general and Bitcoin in particular come in.  It is a distributed, peer-to-peer currency based on encryption technology.  Since it is distributed, there is no central authority involved, i.e. the Government can't get all up in your business when you buy something.

How does Bitcoin Work?

Bitcoin, like every cryptocurrency I've looked at, uses a built-in ledger system.  When you spend a bitcoin, both you and the other party cryptographically sign the transaction transferring the coin.  The ledger is called the Blockchain and is maintained in a distributed manner by a number of Internet servers that essentially maintain a distributed database of bitcoins.  When you sign a transaction, it is broadcast to the network, which validates it and adds it to its transaction database (the blockchain ledger).

You will notice that the government is not involved in any of this, so you have the possibility of anonymous payment without having to physically hand over cash.  There's a pretty good introduction to how Blockchain works at Zerohedge.
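To make the ledger idea concrete, here's a toy sketch of a hash-chained ledger in Python.  This is a simplification, not real Bitcoin: it leaves out the digital signatures and the peer-to-peer consensus, and just shows the core trick - each block records the hash of the block before it, so tampering with any old transaction breaks every link that follows:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: each block stores the previous block's hash.
GENESIS = "0" * 64
chain = []
prev = GENESIS
for tx in ["Alice pays Bob 1", "Bob pays Carol 1"]:
    block = {"tx": tx, "prev": prev}
    chain.append(block)
    prev = block_hash(block)

def valid(chain: list) -> bool:
    """Walk the chain and verify every back-link."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

print(valid(chain))          # True
chain[0]["tx"] = "Alice pays Mallory 1000"
print(valid(chain))          # False - rewriting history breaks the chain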

So who "mints" Bitcoins?

Each cryptocurrency has its own way of cryptographically creating new coins.  This is called "mining" and is very CPU intensive.  The cryptographic algorithms used are designed to be highly resistant to forgery (as you can imagine, this is an absolute requirement for a currency), but the downside is that you need to do a lot of calculations.
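Here's a toy sketch (not Bitcoin's actual parameters) of why mining burns so much CPU.  The miner brute-forces a nonce until the block's hash happens to start with enough zeros; checking a solution is instant, but finding one takes, on average, an exponential number of tries as the difficulty rises:

```python
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    """Find a nonce so sha256(data + nonce) starts with `difficulty` hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("pay Alice 1 coin", difficulty=4)
print(nonce, hashlib.sha256(f"pay Alice 1 coin{nonce}".encode()).hexdigest())
```

Each extra hex digit of difficulty multiplies the expected work by 16, which is how the real network keeps block times roughly constant as miners get faster.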

Interestingly, we're starting to see coin mining being used behind the scenes as a replacement for web ads.  We are also beginning to see malware that does coin mining on your computer rather than doing click fraud.  As always, it's the advertisers and Black Hats who figure out how to monetize the 'net.

Each cryptocurrency designs in a limit on how many coins can be mined.  Bitcoin caps the supply at 21 million coins.  Because the mining reward halves on a fixed schedule, the last coins aren't expected to be mined until around the year 2140.
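The 21 million figure isn't arbitrary - it falls out of the reward schedule.  The block reward started at 50 BTC and halves every 210,000 blocks, with amounts tracked as integer satoshis (hundred-millionths of a coin).  Summing the whole schedule in a few lines of Python shows the cap:

```python
# Bitcoin's supply cap, derived from the halving schedule.
SATOSHI = 10**8                 # smallest unit: 1 BTC = 10^8 satoshis
BLOCKS_PER_HALVING = 210_000

reward = 50 * SATOSHI           # initial block reward: 50 BTC
total = 0
while reward > 0:
    total += BLOCKS_PER_HALVING * reward
    reward //= 2                # integer halving, just like the protocol

print(total / SATOSHI)          # just under 21 million coins
```

The integer division eventually rounds the reward down to zero, which is why the total lands just shy of 21 million rather than exactly on it.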

How do I use it?

You need software (typically called a "wallet").  There are web-based wallets that maintain everything on the 'net, you can install software on your computer (remember to back up your data!), and there are hardware smart cards that will keep your bitcoins on an easily transported (and possibly harder for malware to steal) device.

You can spend Bitcoins wherever they are accepted.  Paypal does (or did) accept bitcoins, as do kind of a lot of other places.

C'mon Borepatch - you know this is just for buying weed, right?

Whenever you talk about Bitcoin, there's a lot of talk about the "Dark Internet", underground economy, and black market.  There's a problem with this.

Bitcoin is pseudonymous, not anonymous like cash - every transaction you make is tied to your address.  Depending on your operational security, that address may be easy or hard to link to your physical identity.  This gets into cloak and dagger tradecraft, which I won't go into here, but caveat emptor.  If you're looking to buy weed off the Dark Net, you'd want very good tradecraft indeed, I would imagine.

Ransomware (like WannaCry) has demanded payment in Bitcoin, so there's attention from the Bad Guy community.

Other than Anarcho-capitalist techno-cred (which probably has peaked anyway), it looks like most of the action in Bitcoin is financial speculation.  This is really high risk because there are nearly a thousand different cryptocurrencies and most are very likely going to end up worthless.


Do Governments hate Bitcoin?

Probably.  Remember, it was designed to be distributed, not requiring a central bank.  Governments like central banks because it gives them a control point.  There's some speculation that governments will crack down, and China (at least) has outlawed purchase of physical goods using bitcoin.  Where this will go remains to be seen.

So there you have it, the world's shortest overview of Bitcoin.

UPDATE 20 December 2017 10:16: Robert Graham has some interesting thoughts on Bitcoin here.