
Your Objections to the Google-Fitbit Merger


EFF Legal Intern Rachel Sommers contributed to this post.

When Google announced its intention to buy Fitbit in April, we had deep concerns. Google, a notoriously data-hungry company with a track record of reneging on its privacy policies, was about to buy one of the most successful wearables companies in the world—after Google had repeatedly tried, and failed, to launch a competing product of its own.

Fitbit users give their devices extraordinary access to their sensitive personal details, from their menstrual cycles to their alcohol consumption. In many cases, these "customers" didn't come to Fitbit willingly, but instead were coerced into giving the company their data in order to get the full benefit of their employer-provided health insurance.

Companies can grow by making things that people love, or they can grow by buying things that people love. One produces innovation, the other produces monopolies.

Last month, EFF put out a call for Fitbit owners' own thoughts about the merger, so that we could tell your story to the public and to the regulators who will have the final say over the merger. You obliged with a collection of thoughtful, insightful, and illuminating remarks that you generously permitted us to share. Here's a sampling from the collection:

From K.H.: “It makes me very uncomfortable to think of Google being able to track and store even more of my information. Especially the more sensitive, personal info that is collected on my Fitbit.”

From L.B.: “Despite the fact that I continue to use a Gmail account (sigh), I never intended for Google to own my fitness data and have been seeking an alternative fitness tracker ever since the merger was announced.”

From B.C.: “I just read your article about this and wanted to say that while I’ve owned and worn a Fitbit since the Charge (before the HR), I have been looking for an alternative since I read that Google was looking to acquire Fitbit. I really don’t want “targeted advertisements” based on my health data or my information being sold to the highest bidder.”

From T.F.: “I stopped confirming my period dates, drinks and weight loss on my fitbit since i read about the [Google] merger. Somehow, i would prefer not to become a statistic on [Google].” 

From D.M.: “My family has used Fitbit products for years now and the idea of Google merging with them, in my opinion, is good and bad. Like everything in the tech industry, there are companies that hog all of the spotlight like Google. Google owns so many smaller companies and ideas that almost every productivity and shopping app on any mobile platform is in some way linked or owned by them. Fitbit has been doing just fine making their own trackers and products without any help from the tech giants, and that doesn’t need to stop now. I'm not against Google, but they have had a few security issues and their own phone line, the pixel, hasn't been doing that well anyway. I think Fitbit should stay a stand alone company and keep making great products.”

From A.S.: “A few years back, I bought a Fitbit explicitly because they were doing well but didn't seem to be on the verge of being acquired. I genuinely prefer using Android over iOS, and no longer want to take on the work of maintaining devices on third party OSes, so I wanted to be able to monitor steps without thinking it was all going to a central location.

Upon hearing about the merger, I found myself relieved I didn’t use the Fitbit for long (I found I got plenty of steps already and it was just a source of anxiety) so that the data can't be merged with my massive Google owned footprint.”

From L.O.: “A few years ago, I bought a Fitbit to track my progress against weight-loss goals that I had established. Moreover, I have a long-term cardiac condition that requires monitoring by a third-party (via an ICD). So I wanted to have access to medical data that I could collect for myself. I had the choice to buy either an Apple Watch, Samsung Gear, Google Fit gear, or a Fitbit. I chose to purchase a Fitbit for one simple reason: I wanted to have a fitness device that did not belong to an OEM and/or data scavenger. So I bought a very expensive Fitbit Charge 2. I was delighted by the purchase. I had a top-of-the-line fitness device. And I had confidence that my intimate and personal data would be secure; I knew that my personal and confidential data would not be used to either target me or to include me in a targeted group.

Now that Google has purchased Fitbit, I have few options left that will allow me to confidentially collect and store my personal (and private) fitness information. I don't trust Google with my data. They have repeatedly lied about data collection. So I have no confidence in their assertions that they will once again "protect" my data. I trust that their history of extravagant claims followed by adulterous actions will be repeated.

My fears concerning Google are well-founded. And as a result, I finally had to switch my email to an encrypted email from a neutral nation (i.e., Switzerland). And now, I have to spend even more money to protect myself from past purchases that are being hijacked by a nefarious content broker.  Why should I have to spend even more money in order to ensure my privacy? My privacy is guaranteed by the United States Constitution, isn't it? And it in an inalienable right, isn't it? Since when can someone steal my right to privacy and transform it into their right to generate even more money? As a citizen, I demand that my right to privacy be recognized and defended by local, state, and federal governments. And in the meantime, I'm hoping that someone will create a truly private service for collecting and storing my personal medical information.”

From E.R.: “Around this time last year, I went to the Nest website. I am slowly making my condo a smart home with Alexa and I like making sure everything can connect to each other. I hopped on and was instantly asked to log in via Google. I was instantly filled with regret. I had my thermostat for just over a year and I knew that I hadn't done my research and the Google giant had one more point of data collection on me - plus it was connected to my light bulbs and Echo. Great. 

Soon, I learn the Versa 2 is coming out - best part? It has ALEXA! I sign up right away—this is still safe. Sure. Amazon isn't that great at data secrets, but a heck of a lot better than Google connected apps. Then, I got the news of the merger. I told my boyfriend this would be the last FitBit I owned—but have been torn as it has been a motivating tool for me and a way to be in competition with my family now that we live in different states. But it would be yet another data point for Google, leaving me wondering when it will possibly end. 

This may be odd coming from a Gmail account—but frankly, Gmail is the preferred UI for me. I tried to avoid Google search, but it proved futile when I just wasn't getting the same results. Google slowly has more and more of my life—from YouTube videos, to email, to home heating, and now fitness... when is enough enough?”

From J.R.: “My choice to buy a Fitbit device instead of using a GoogleFit related device/app is largely about avoiding giving google more data. 

My choice to try Waze during its infancy was as much about its promise to the future as it was that it was not a Google Product and therefore google wouldn't have all of my families sensitive driving data.

Google paid a cheap 1 Billion to purchase all my data from Waze and then proceed to do nothing to improve the app. The app actually performs worse now on the same phone, sometimes taking 30 minutes to acquire GPS satellites that Google Maps (which i can't uninstall) see immediately. 

Google now has all my historic driving data for years.... besides the fact that there is no real competitor to Waze and it does not seem like any company will ever try to compete with Google again on Maps and traffic data... why not continue using it? from my history, they can probably predict my future better than me.

The same with Fitbit... Now google will know every place I Run, Jog and walk.... not just where I park but exactly where i go.... is it not enough for them to know i went to the hospital but now they will know which floor (elevation), which wing (precise location data).... they will get into mapping hospitals and other areas.... they will know exactly where we are and what we are doing....  

They will also sell our health data to various types of insurance companies, etc.

I believe Google should be broken up and not allowed to share data between the separate companies. I don't believe google should be able to buy out companies that harvest data as part of their mission. If google buys fitbit, i will certainly close the account, delete what I can from it and sell the fitbit (if it has value left)....”

While the overwhelming majority of comments sought to halt the merger, a few people wrote to us in support of it. Here's one of those comments.

From T.W.: “I'm really looking forward to the merger. I see the integration of Fitbit and Google Fit as a great bonus and hope to get far more insights than I get now. Hopefully the integration will progress really soon!”

If you're a Fitbit owner and you're alarmed by the thought of your data being handed to Google, we'd love to hear from you. Write to us at mergerstories@eff.org, and please let us know:

  • If we can publish your story (and, if so, whether you'd prefer to be anonymous);
  • If we can share your story with government agencies;
  • If we can share your email address with regulators looking for testimony.


Shared by michelslm, 9 days ago

Old Days 2

The git vehicle fleet eventually pivoted to selling ice cream, but some holdovers remain. If you flag down an ice cream truck and hand the driver a floppy disk, a few hours later you'll get an invite to a git repo.
Shared by michelslm, 9 days ago
Comments:

crazyscottie (12 days ago): FWIW, the comics and alt text are working for me on mobile. Is alt_text_bot still needed?

dukeofwulf (12 days ago): I like it. Even on desktop it's easier for me to read than the hover text.

AlexHogan (12 days ago): I like it too. Good Bot!

acdha (12 days ago): It’s a tradition at this point

steelhorse (12 days ago): For the longest time I couldn't finish my program because someone had misplaced the "blue" punch card.

9a3eedi (11 days ago): Whoever made this bot, I love you

Apple and the ARM Transition

I didn't have any particular posts planned for this week, but as is often the case, the news offered a useful topic: Apple's upcoming ARM transition.


For years now, people have been discussing the possibility of Apple moving to ARM processors for their desktops and laptops.  Their mobile devices have been ARM from the beginning, and with Apple's modern phone processors giving Intel's x86 offerings a solid competitor, it's been an obvious question - but just an interesting one, until the rumors recently solidified and it started to sound like this is actually going to happen.

I know my way around some deep processor weeds, so this has been interesting to me.  There's a ton of uninformed garbage on the internet, as usual, so I'll offer my attempt to clarify things!

And as of Monday, I'll probably be proven mostly wrong...


x86?  ARM?  Power? 68k?  Huh?

If (when?) Apple makes this change, it will be the third major ISA (Instruction Set Architecture) transition in the company's history (68k -> PPC, PPC -> Intel, Intel -> ARM) - making them pretty much the only consumer hardware company to regularly change processor architectures, and certainly the largest to pull it off.

The instruction set is, at nearly the lowest level, the "language" that your processor speaks.  It defines the opcodes that execute on the very guts of the processor, and there have been quite a few of them throughout the history of computing.  Adding two numbers, loading a value from memory, deciding which instruction to execute next, talking to external hardware devices - all of this is defined by the ISA.

In the modern world of end user computing, there are only two that matter, though: ARM and x86 (sorry, RISC-V, you're just not there yet).  Up until recently, the division was simple.  x86 processors (Intel and AMD) ran desktops, laptops, and servers.  ARM processors ran phones, tablets, printers, and just about everything else.

Except for a few loons (like me) who insisted against all reason on making little ARM small board computers to function as desktops, this was how things worked.  But not anymore!

Intel and AMD: The Power of x86

For decades now, the king of the performance hill has been the x86 ISA - Intel and AMD's realm (to within a rounding error - yes, I know there have been other implementations, I've owned many of them, and they don't matter anymore).  Pentium.  Core.  Athlon.  Xeon.  These are the powerhouses.  They run office applications.  They run games.  They run the giant datacenters that run the internet.  If it's fast and powerful, it's been x86.  Intel is the king, though AMD has made a proper nuisance of themselves on a regular basis (and, in fact, the 64-bit extensions to x86 you're probably running were developed by AMD and licensed by Intel).


Desktops, laptops, servers.  All powered by the mighty x86.  From power sipping Atom processors in netbooks to massive Xeons and Opterons in data centers, x86 had you covered.  It's been the default for so long in anything powerful that most people don't even think about it.

But the x86 ISA is over 40 years old (introduced in 1978), and it has accumulated a lot of cruft and baggage over the years.  It's maddeningly complex, with all sorts of weird corners and nasty gotchas - and that's just gotten worse with time.  MMX.  SSE.  VMX.  TSX.  SGX.  TXT.  SMM.  You either have no idea what those are, or you're shuddering a bit internally.

All of it adds up.  Go read over the Intel SDM for fun, if you're bored.  It's enjoyable, I promise - but it's also pretty darn complicated.

ARM: Small, Power Efficient, and Cheap

Historically, ARM has been the lower-power, far cheaper alternative to x86.  The ARM company designs the CPU cores that most vendors simply license (A series in phones and laptops; if you know what the M or R series cores are, you don't need this blog post).  They're a set of compromises - fairly small in terms of die area (so "cheap to make") and power efficient.  They're not focused on blazing fast performance.  But they're efficient, usually good enough, and as a result they've ended up in just about everything that's not a desktop or laptop.

I've had a habit of messing around with these chips in things like the Raspberry Pi and Jetson Nano - they are good enough, for radically less money than a flagship Intel system.

In the last year or two, ARM chips started firing solid cannon blasts across the bow of the Intel Xeon chips in the datacenter.  If you don't care about raw single threaded performance, but care an awful lot about total throughput and power efficiency, ARM server chips are properly impressive.  For less money than a Xeon, you can have more compute on fewer watts.  Not bad!


They've made their way into laptops and desktops, but they've not been a threat to the flagship performance of Intel.

Except... there's Apple.  Who, on occasion, is totally insane.

Well... or not.  Apple's ARM: Big, Power Efficient, and Fast

Most companies just license ARM's cores and attach their desired peripherals to them.

But a few companies have the proper licenses and engineering staff to build their own, totally custom ARM cores.  Apple is one of these companies.  Starting with the A6 processor in the iPhone 5, they've been using their own custom processor implementations.  They were ARM processors, sure, but they weren't ARM's stock core designs.  The first few of these were fairly impressive at the time, but still were clearly phone chips.  And then Apple got crazy.

If you own an iPhone 11, it's (probably) faster than anything else in your house, in terms of single threaded tasks - on a mere few watts!


Apple has quietly been iterating on their custom ARM cores, and has created something properly impressive - with barely anyone noticing in the meantime!  Yeah, the review scores are impressive, yeah, modern Android devices are stomped by the iPhone 8 on anything that's not massively parallel, but they're phones.  They can't threaten desktops and laptops, can they?

They can - and they are.

A laptop has a far larger thermal and power envelope than a phone.  The iPhone 11's battery is 3.1Ah @ 3.7V - 11.47Wh.  The new 16" MacBook Pro has a 100Wh battery - roughly 8.7x larger.  Some of that goes to the screen, but there's a good bit of power to throw at compute.  And if you plug a computer in, well, power's not a problem!
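The back-of-the-envelope math here is just watt-hours = amp-hours times volts, using the figures quoted above:

```python
# Watt-hours = amp-hours * volts, using the numbers quoted above.
phone_wh = 3.1 * 3.7                    # iPhone 11 battery
laptop_wh = 100.0                       # 16" MacBook Pro battery
print(round(phone_wh, 2))               # 11.47
print(round(laptop_wh / phone_wh, 1))   # 8.7
```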

This is interesting, of course - but then there's how Intel has found themselves stuck.

Intel: Stuck on 14nm with a Broken uArch

If you've kept up with Intel over the last few years, none of this will be a surprise.  If you haven't, and have just noticed they regularly release new generations of processors, pay attention now.

Intel has been stuck on their 14nm process for about 4 years.  Their microarchitecture is fundamentally broken in terms of security.  And, worse, they can't reason about their chips anymore.

Yes, they've been iterating on 14nm, and releasing 14nm+, 14nm++, 14nmXP, 14nmBBQ, and they're improvements - but everyone else has leapfrogged them.  Intel used to have an unquestioned process technology lead over the rest of the tech industry, and it showed.  Intel chips (post Netburst) used less power and gave more performance than anything else out there.  They were solid chips, they performed well, and they were worth the money for most cases.  But they've lost their lead.

Not only have they lost their lead, their previous chips have been getting slower with age.  You've probably heard of Meltdown and Spectre, but those are only the camel's nose.  There has been a laundry list of other issues.  Most of them have fixes - some microcode updates, some software recompilation, some kernel workarounds... but these all slow the processor down.  Your Haswell processor, today, runs code more slowly than it did when it came out.  Whoops.
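If you're curious what your own machine is paying for, recent Linux kernels report per-vulnerability mitigation status in sysfs:

```shell
# Each file names a CPU vulnerability and whether (and how) it's mitigated.
grep . /sys/devices/system/cpu/vulnerabilities/*
```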

Finally, Intel can't reason about their chips anymore.  They seem unable to fully understand the chips, and I don't know why - but I do know that quite regularly, some microarchitectural vulnerability comes out that rips open their "secure" SGX enclaves - again.  In the worst of them, L1 Terminal Fault/Foreshadow, the fix is simple - flush the L1 cache on enclave transition.  It's an expensive enough world switch that the hit isn't too bad, but Intel didn't know it was a problem until they were told.

So, why would you want to be chained to Intel anymore?  There just aren't any good reasons.  There's AMD, who has caught up, but I just don't see Apple moving from one third party vendor to another.  They're going to pull it in-house - which should be very, very interesting.

Apple's ARM Advantages

Given all this, why would Apple be considering ARM chips for their Macs?

Because it gives them better control over the stack, which should translate, rather nicely, to a better user experience (at least for most users).

They'll be able to have the same performance on less power - and, in the bargain, be able to integrate more of their "stuff" on the same chip.  There won't be a need for a separate T2 security processor when they can just build it into their main processor die.  They won't be stuck dealing with "Whatever GPU abomination Intel has dumped in this generation" - they can design it for their needs, with their hardware acceleration, for the stuff they care about (yes, I know Intel isn't as bad as they used to be, but I sincerely doubt Apple is happy with their integrated GPUs).

Fewer chips means a smaller board - which generally means less power consumption or better performance.

One thing I've not seen mentioned elsewhere is that Apple is literally sitting at the limit of how much battery they can put in a laptop.  They claim their laptops have a 100Wh battery - with a disclaimer.


What, you might ask, is wrong with a 100Wh battery, such that there's a disclaimer saying that it's actually 0.2Wh less?


There are a lot of shipping (and carry-on luggage) regulations that use 100Wh as a limit somewhere - and they're usually phrased "less than 100Wh" for the lowest category.  Apple is being extra explicit that their laptops come in under 100Wh - even if they advertise them as being equal to 100Wh.  USPS shipping, UPS, TSA... all of these have something to say about 100Wh.

Apple literally can't put more battery in their laptops.  So the only way to increase runtime is to reduce the power they use.  Their ARM chips almost certainly are using less power than Intel chips for the same work.

Or, a smaller, lighter laptop for the same runtime.  Either way, it's something that sets them apart from the rest of the industry, who will be stuck on Intel for a while longer.  Microsoft's ARM ambitions have gone roughly nowhere, and Linux-only hardware isn't a big seller outside Chromebooks.

They also gain a lot from being able to put the same "stuff" in their iOS devices (which have a huge developer community, even if a lot of them are pushing freemium addictionware) as in their laptops.  Think machine learning accelerators, neural network engines, speech recognition hardware.  The gap between what an iPhone can do and what a MacBook Pro can do (for the things where the phone is better) ought to narrow significantly.  We might even see a return of laptops with built in cell modems!

Apple tries to control the whole stack, and this removes a huge component that they don't control.  And it should be good!

Now, to address a few things that have been the subject of much discussion lately...

x86_64 on ARM Emulation

One of the sticking points about Apple switching to ARM is that they're based on x86 right now - so, what happens to your existing software?  Switching to ARM means that x86 software won't work without some translation.  It turns out, Apple has a massive amount of experience doing exactly this sort of thing.  They made 68k binaries run fine on PowerPC, and they made PowerPC binaries run acceptably on x86.  I fully expect they'll do the same sort of thing to make x86 binaries run on ARM.

Emulating is hard, especially if you want to do it fast.  There's a lot of work in the JIT (Just In Time) space for things like Javascript, but if Apple does ship emulation (which I think they will), I'd expect a JIT engine to only be used infrequently.  The real performance would come from binary translation - treating the x86 binary as source code that gets turned into an ARM binary.  This lets you figure out when some of x86's weirder features (like the flags register being updated on every instruction) are needed, and when they can be ignored.  One could also, with a bit of work, probably figure out when ARM's rather relaxed memory model compared to x86 will cause problems and insert some memory fences.  It's not easy, but it's certainly the type of thing Apple has done before.
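To make the flags example concrete, here's a toy liveness pass in Python - the three-field instruction format is invented for illustration, and nothing here is Apple's actual translator - that decides which x86 flag updates a binary translator actually has to emit:

```python
def live_flag_writes(prog):
    """Return the indices of instructions whose flags update must be kept.

    prog is a list of (mnemonic, writes_flags, reads_flags) tuples - a
    made-up straight-line representation with no branches joining in.
    Walking backwards, a flags write only matters if some later
    instruction reads the flags before they are overwritten again.
    """
    keep = set()
    live = False  # will the flags value at this point be read later?
    for i in range(len(prog) - 1, -1, -1):
        _, writes, reads = prog[i]
        if writes and live:
            keep.add(i)
        live = reads or (live and not writes)
    return keep

prog = [
    ("add r1, r2", True, False),  # flags result is dead (cmp clobbers it)
    ("add r3, r4", True, False),  # dead too
    ("cmp r1, r3", True, False),  # read by jz below - must be emitted
    ("jz  done",   False, True),
]
print(live_flag_writes(prog))  # {2}
```

Real translators run this kind of analysis over a control-flow graph rather than straight-line code, but the payoff is the same: most of x86's per-instruction flag updates turn out to be dead and never need to be materialized on ARM.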

Plus, ARM has more registers.  ARMv8 has 31 general-purpose registers (plus a dedicated zero register), while x86_64 only has 16 main architectural registers (plus some MSRs, segments... it's a mess as you go deeper).  That tends to make emulation a bit more friendly.

I have no doubt that the emulation will "go native" at the kernel syscall interfaces - they won't be running an x86 kernel in emulation to deal with x86 binaries.  This is well established in things like qemu's userspace emulation.

What would be very interesting, though, and I think Apple might pull this off, would be "going native" at the library calls.  If you're recompiling a binary, you know when the code is going to call into an Apple library function (assuming the binary isn't being deliberately obscure - which most aren't).  If they got a bit creative, emulated software could jump from the emulated application code to native library code.  This would gain a lot of performance back, because all the things Apple's libraries do (which is a lot) would be at native performance!

Remember, Apple has their own developer stack and can tweak the compilers as they want.  It wouldn't surprise me if they've been emitting emulation-friendly x86 for the past few years.

Of course, we'll find out Monday.

Virtualization: Goodbye Native Windows Performance

The next question, for at least a subset of users, is "But what about my virtual machines?"

Well, what about them? ARM supports virtualization, and if all you want to do is run a Linux VM, just run an ARM Linux VM!

But if you need x86 applications (say, Windows 10)?  This is where it will be interesting to watch.  I fully expect solutions out there - VirtualPC, years ago, ran x86 Windows on PowerPC hardware.  I just don't know what sort of performance you can get out of full system emulation on ARM.  Normally, things have been the other way - emulate a slow ARM system on a fast x86 machine.  The performance isn't great, but it's good enough.  Going the other way, though?  Nobody has really tried, because we've never had really fast ARM chips to mess with.  Of course, if Apple's chips are 20-30% faster than the comparable Intel chips, you can spend some time on emulation and still come out even or ahead.

If you're hoping to run some weird x86 utility (Mikrotik Winbox comes to mind as the sort of thing I'd run), I'm sure there will be good enough solutions.  Maybe this will light a fire under Microsoft to fix their ARM Windows and emulator.

But for x86 games?  Probably not.  Sorry.

The ARM64 Software Ecosystem: Yay!

Me, though?  I'm really excited about something that almost nobody else cares about.

The aarch64 software ecosystem is going to get fixed, and fast!

I've been playing with ARM desktops for a while now - Raspberry Pi 3, Raspberry Pi 3B+, Raspberry Pi 4, and the Jetson Nano.  Plus a soon-to-be-reviewed PineBook Pro (which I'm typing this post on).


The Raspberry Pis have made the armhf/aarch32 ecosystem (32-bit ARM with a proper floating point unit) tolerable.  That plus the ARM Chromebooks means that ARM on 32-bit is quite livable.

The 64-bit stuff, though?  Stuff just randomly fails.  Clearly, there is very little development and polishing on it, and you end up with things like Signal not building on 64-bit ARM, because some ancient dependency deep in the Node dependencies doesn't know what aarch64 means (why it should matter in the first place... well, web developers gonna web developer).

I expect all of this to be fixed - and fast!  Which is great news for light ARM systems!

Intel's Future

Where does this leave Intel?

It depends a bit on just how good Apple is, and how soon Intel can get back on the tracks.  If they iterate quickly, get their sub-10nm processes working, and fix their microarchitecture properly (instead of using universities as their R&D/QA labs), we should be in for a good decade of back and forth.  Apple's team is excellent, Intel's team... well, used to be excellent, at least, and they've certainly both got resources to throw at it.  As we crash headlong into the brick wall at the end of Moore's Law, getting creative (ideally in ways that don't leak secrets) is going to be important.

We could end up with something like the state of phones.  Anandtech's iPhone 11 review has benchmark charts like this entertaining test of web performance.  Apple, Apple's previous generation, Apple's generation before that, a big gap, and then the rest of the pack.  And, yes, they do mention that the iPhone 11 is a desktop grade chip in terms of SPEC benchmark performance.


Will it be this bad? Probably not.  But it's a possibility.  It really depends on what Apple's chip designers have waiting in the wings for when someone says, "Yes you can haz 35W TDP!"

But between Apple likely moving away from Intel, and data centers working out that the ARM options are cheaper to buy, cheaper to run, and faster in the bargain?  Intel might have a very hard hill to climb - and they may not make it back up.

Coming up: More ARM!

If you're enjoying the ARM posts, there's more to come!  My PineBook Pro arrived a few weeks ago, and I've been beating on it pretty hard as a daily driver.  What makes it tick?  How is it?  How can we fix rough edges? All that, and more, in weeks to come!

Plus solar progress, if the weather ever agrees...
Shared by michelslm, 15 days ago

Computers as I used to love them

Illustration by Yulia Prokopova

I’ve been struggling with file sync solutions for years. In the beginning, Dropbox was great, but in the last few years, they started to bloat up. I moved to iCloud, but it was even worse. Finally, a few days ago, after iCloud cryptically broke again, I decided it’s time to try something different.

I tried Syncthing, a free and open-source alternative. And you know what? It’s been liberating. The sanity, the simplicity, the reliability, the features. It brings the joy of use and makes you believe the collapse of civilization can be slowed down a bit.

Syncthing is everything I used to love about computers.

It’s amazing how great computer products can be when they don’t need to deal with corporate bullshit, don’t have to promote a brand or sell out their users. Frankly, I almost ceased to believe it’s still possible. But it is.

Installation

You download a single binary executable. You run it. There’s no step three.

No, seriously. It’s so simple I thought I missed something. But no. After you run that binary, you have a fully operational node of Syncthing. It’s ready to sync with any other Syncthing node, no other setup necessary. There are no installers, no package management (but there are packages if you want them), no registration, no email, no logins, no password creation, no 2FA, no consents, no user agreements. Just download and run. Heck, setting up autostart on a Linux server was more complex than running the app itself!

Homebrew makes it even simpler:
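On macOS, the whole install is one command (plus an optional second one if you want it launched automatically):

```shell
brew install syncthing
# Optional: start it now and relaunch it at every login.
brew services start syncthing
```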

Just to give you the perspective, these are all the steps that Dropbox puts you through when you install it on a new computer:

Aaaaand… that’s not all! You also get this annoying notification to deal with:

Only at this point can you start using Dropbox. Luckily, I already had an account, otherwise, it would be 5 more steps. Ridiculous!

(It goes without saying that all of these are different windows. It does not happen in a single predictable area, mind you. You have to chase every one of them. And the “Set Up Dropbox” window is always-on-top, so it hides other required steps, which also adds to the fun.)

No artificial limits

Because Syncthing is free and doesn’t depend on server-side storage, they don’t need to put weird or unnatural restrictions on you. You can use as much space as you have on disk. You can sync as many folders as you want. You can sync any folder, no matter where it’s located. You can sync with anyone in the world. In fact, you can sync any folder with any number of people. At no point do you have to wonder “but will it work with my plan?” If your hardware allows it, it will work. As simple as that.

Folders are the most vivid example of how other cloud storages constantly fuck up the simplest things. Syncthing can sync any folder on your drive, located anywhere. You can sync existing folders. You can sync multiple different folders. Folders are just folders, nothing special about them. Here I’m syncing “system” folders: ~/Desktop and ~/Library/Fonts, and three custom ones. No sweat:

This simplicity lets you use it as a tool you can apply, sometimes creatively, to your task, not as a service you have to put up with. For example, by syncing ~/Library/Fonts, if I install a font on one machine, it automatically installs everywhere.

Contrast this with Dropbox, which requires you to put everything inside the ~/Dropbox folder. If you keep your projects under ~/work and want to sync them, well, tough luck. You can’t sync multiple folders either. Well, technically Dropbox can sync anything, of course. Files are files. But branding dictates there MUST be a Dropbox folder somewhere, even if it’s inconvenient for the user.

Sweet, sweet branding...

But the worst offender is iCloud. Like Dropbox, it requires you to put all your stuff into a folder. But that folder is called ~/Library/Mobile Documents/com~apple~CloudDocs!!!

If you are a programmer, it’s unusable. First, you can’t in your right mind type THAT every time you need to cd. Second, it contains spaces! Which breaks all sorts of things, believe it or not, even in 2020. I can’t keep Fira Code in iCloud because of Python scripts, I can’t keep a Jekyll blog like this one there because of Ruby, I can’t run bazel, etc. Useless.

And if you think symlinking it to ~/icloud helps, believe me, it does not.

No registration

How do you connect two devices if there’s no registration, no accounts, no email? Simple! Each device has a unique id, generated automatically when you first run the program. Share this id with the other device, let it share its own, and you are good to go.

Best news? Those ids are not even secret. They are more like public keys, so you can exchange them freely. But the scheme only works if both devices know each other’s ids.

What I like about this scheme is how beautifully simple and down-to-absolute-essentials it is. This is pure mathematics. But it’s also very convenient to use. There’re no emails, no forms, no unresponsive web pages, no invites, no expiring tokens, no failing/outdated/overloaded APIs, no password management, nothing to hold onto or “manage”.
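As a sketch of what this looks like in practice, assuming the standard syncthing binary is on your PATH (the exact flag spelling may vary between versions):

```shell
# Print this machine's device ID: a long string of letters and digits.
# It is derived from the device's certificate and is safe to share,
# much like a public key.
syncthing -device-id
```

You add that id on the other device, it shares its own with you, and the two can start syncing.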

Power mode

There’s a power user mode! If you don’t care, there’s always the UI, where you can configure most things. But if you’re a programmer and need more, you can:

  • Install Syncthing on a headless Linux server,
  • Control it by editing XML config,
  • Control it via REST API,
  • Configure folder ignores via regular expressions.
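To illustrate the REST route, a minimal sketch with curl, assuming a local instance on the default port 8384 and an API key copied from the GUI settings (the key below is a placeholder):

```shell
# Ask the local Syncthing daemon for its status as JSON.
# Replace YOUR_API_KEY with the key shown in the Syncthing GUI settings.
curl -s -H "X-API-Key: YOUR_API_KEY" \
  http://localhost:8384/rest/system/status
```

The same API exposes configuration, connected devices, and per-folder sync state, so everything the UI does can be scripted.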

All APIs and configs are well-documented:

For example, this is my .stignore for the workspace folder:

Configure it once and forget about generated classes, vendored dependencies, and other caches syncing unnecessarily, forever.
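A .stignore file is just a list of glob-style patterns, one per line. As a hypothetical sketch (not the author’s actual file):

```
// Lines starting with a double slash are comments
node_modules
target
.DS_Store
*.log
```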

In contrast, iCloud has a feature to exclude *.nosync files from syncing, but you know what? I usually don’t have files called *.nosync, that’s the problem:

And Dropbox? Well… I still have nightmares about this Dropbox UI:

It’s kind of funny how commercial apps have feature bloat but no power mode. You can do more different things, but you can’t configure them to your liking.

No upsell

Commercial solutions are interested in keeping users locked in and constantly upselling more features to them. As a result, you get notifications, features, popups. For example, on this screenshot, taken right after I installed Dropbox on a fresh machine:

Top to bottom:

  • I already have an annoying red dot in the menubar,
  • Link to another product (Paper), even though it has nothing to do with file synchronization,
  • A firm suggestion I should enable notifications,
  • A notification that says my Desktop app is ready for use?! I mean, I’m looking at it from the desktop app!
  • Dropbox advertising some sort of trial,
  • Dropbox selling me more space (even though it was 2 years ago and I have >50% free),
  • Large “Upgrade” button,

In the mystic “For you” tab:

we see:

  • Starred items? What is it, a high-school notepad? If I really wanted to, I could tag files in the OS, but thank you.
  • Calendar sync? Why on Earth would a FILE SYNCHRONIZATION application want to access my calendar?

Wait, there’s more:

More “features”:

  • Desktop sync,
  • Photos sync,
  • Screenshots sync.

These are at least file-like? I don’t understand why they have to be “special features”, though, if you already have an app whose primary task is to sync files. It already does that. Why are some files more special than others?

The answer is simple: the only way Dropbox can survive is by building and selling more features. You’ll never have peace of mind with them.

iCloud is much younger and doesn’t have feature bloat yet, but they are still interested in selling more Macs and iPhones. So they will always try to isolate you from the rest of the world. Expect weird restrictions and great inconveniences, like the iCloud folder location or the Desktop folder being moved when you enable/disable sync for it.

Syncthing’s survival, on the other hand, does not depend on making more features. They do one thing, and they do it well. Look, their menu1 looks exactly like Dropbox used to look back in 2012, when it was still good:

No lock-in

Another ugly thing both iCloud and Dropbox routinely do is trying to scare you out of walking away. These dialogs appear every time you move more than one file outside of the iCloud folder:

And those are Dropbox versions:

It might seem like they are trying to explain something, but they are not. They are scared you might be leaving and try to scare you back. The tactic is simple: question your every action, even trivial operations like moving or deleting files; display huge warning signs even for safe operations; use long, puzzling wording (“documents stored in iCloud will be removed from Mac”) so that you’re never sure what will happen. That’s some shady shit.

Syncthing, on the other hand, simply doesn’t care. They don’t get any money from you, so they are not interested in creating needs or constantly reminding you about themselves. If you are looking for peace of mind, you can’t have it with commercial offerings.

Conclusion

Syncthing has reminded me how great computers can be if they are not made by corporations. It’s simple, predictable, sane, no-nonsense. You can configure it however you like, and it always keeps you in control. It’s a pure function, and it’s good at that. It’s free and open-source, but I’m much happier donating €10/month to them than to, e.g., Dropbox. I would be a much happier person if at least half of the programs on my Mac/iPhone were like that.

  1. If you choose to install the macOS app


GNOME 3.36 / Endless OS 3.8


Endless OS 3.8.0 has just been released, which brings GNOME 3.36 to our users. There’s a particularly big overlap between “improvements in Endless OS” and “improvements in GNOME” this cycle, so I wanted to take a minute to look back over what the Endless team worked on in GNOME 3.36.

Login & Unlock Screen

Allan Day has already written about the improvements to the login and unlock experience in GNOME 3.36, so I won’t retread this ground in too much detail. As he (and Nick Richards, in his trip report for Endless OS 3.8.0) mentioned, this change has been anticipated for a long time, so I’m particularly glad that Georges Stavracas and Umang Jain (together with Florian Müllner from Red Hat) could make this happen for this release. The first thing I interact with when I sit down at my computer is the login screen or the lock screen, and the refreshed design is a pleasure to use. (My daughter is sad that Granny’s cat is no longer visible on the lock screen, though.)

GNOME unlock dialog, with Will Thompson's name and face, and password “Tremendousdangerouslookingyak” visible

Peek Password

One improvement that’s perhaps most visible in the redesigned lock screen is the inline “eye” icon to reveal the text in the password field, which was implemented by Umang Jain independently of his work on the lock screen itself. The motivation for this change was actually another system dialogue: the Wi-Fi password dialogue.

During the development of the Hack product – a game-like platform for self-directed learning built atop Endless OS – the team ran many playtesting sessions. While the emphasis of these sessions was on Hack itself, the test users – typically younger teens – would often run through initial setup on a freshly-installed OS. Within a few clicks of turning on the computer, you select your Wi-Fi network and enter its password, which turned out to be a big stumbling block for many users. Wi-Fi passwords are long strings of randomly-generated characters, and on many occasions users simply couldn’t enter the password correctly. The entry has always had a Show Text option in the right-click menu, but right-clicking is itself an unfamiliar operation for younger users more familiar with mobile devices.

Parental Controls, redux

For a year or so, Endless OS has included a parental controls feature, which operates along a couple of axes:

  • Specific installed apps can be disabled for particular users. As a special case, all general-purpose web browsers are controlled by a separate toggle.
  • Not-yet-installed apps visible in GNOME Software — which we rebrand as App Center — can be filtered based on their OARS content rating metadata.
  • Users can be prevented from installing apps at all.

In past releases, this feature was hard to discover and use. At a superficial level, the UI to control it was buried in Settings → Details → Users → (select a non-administrator user) → (scroll down) → (frame within frame within frame). But the real issue was that many Endless OS systems have the child as the primary, administrator user, created through Initial Setup when the machine is unboxed. To meaningfully use parental controls, you’d need to create a separate parent user, then downgrade the child’s account, neither of which is a particularly discoverable operation.

In autumn last year, we met with Allan Day, Richard Hughes and Matthias Clasen from Red Hat to talk through this problem space. Following that, Robin Tafel, Philip Withnall and Matthew Leeds designed and implemented a new flow for parental controls. The key changes are:

  1. Parental controls can be enabled during initial setup. Check a box, choose some options, and specify a parent password.
  2. Once initial setup is complete, there is a dedicated Parental Controls app.

Screenshot of “About You” page from GNOME Initial Setup, showing “Set up parental controls for this user” checkbox (checked)

Screenshot of Parental Controls page of GNOME Initial Setup, showing options to restrict which applications can be installed or used

Screenshot of GNOME Initial Setup “Set a Parent Password” page, with two password fields and one password hint field

Screenshot of Parental Controls application, showing options to restrict which apps a user can install or run

There are a few downstream bits and bobs outstanding, such as a cross-reference from GNOME Settings’ Users panel, but the bulk of this feature is available upstream in GNOME Initial Setup, Software, and Shell 3.36. Parental controls needs close integration with the application management infrastructure, and Flatpak upstream has the necessary hooks. On Endless OS, supporting Flatpak apps — plus Chromium as a special case — is good enough, since that is the sole mechanism for installing applications. It would be great to see support in Malcontent for other package and app managers.

Special thanks to Jakub Steiner for creating a great icon at very short notice.

Malcontent icon: Silhouette of parent and child holding hands

Renaming Folders

One of the biggest differences between vanilla GNOME and Endless OS is the app grid, which in Endless is on the desktop and fully under the user’s control. Georges Stavracas has been incrementally chipping away at this, and support for renaming folders landed in GNOME 3.36.

Screenshot of renaming a folder titled “Jeux”

The Long Tail

Besides highly-visible new features and redesigns, much (perhaps even most?) of the work of maintaining a desktop is in the parts you don’t see: improving libraries and plumbing, incremental tweaks to user interfaces, and dealing with the wide variety of hardware, software and users that interact with GNOME. Spelunking through the commit histories of various projects, I see many names of colleagues present and past, including André Moreira Magalhães and Philip Chimento respectively. Jian-Hong Pan from the Endless kernel team makes an appearance in GNOME Settings, as does a feature from erstwhile Endless kernel hacker Carlo Caione dating back to 2018.

Umang Jain, Philip Withnall and Matthew Leeds have put a lot of work into improving the robustness of GNOME Software and Flatpak, and there’s more landing as we speak. I’m particularly glad that Matthew has been tracking down missing Flatpak app updates in GNOME Software – bugs which hide information can be the trickiest ones to spot. And Philip is solving the latest Mystery of the Missing Progress Bar when installing Flatpak apps in GNOME Software.

I’m certain I’ve missed many great contributions. Please forgive me, fellow Endlessers.

A Broad Church

Perhaps my favourite part of being involved in GNOME is collaborating with great people from organisations who, in a different world, might be bitter rivals. All of the work I’ve described was a joint effort with others from the GNOME community; and, just as other distributors share the fruits of our labour, we and our users share the fruits of theirs. This is the latest in a long line of great GNOME releases – long may this trend continue.


GNOME is not the default for Fedora Workstation


We recently had a Fedora AMA where one of the questions asked was why GNOME is the default desktop for Fedora Workstation. In the AMA we answered why GNOME had been chosen for Fedora Workstation, but we didn’t challenge the underlying assumption built into the way the question was asked; the answer to that assumption is that GNOME isn’t a “default” at all. What I mean by this is that Fedora Workstation isn’t a box of parts where you have default options that can be replaced; it’s a carefully curated and assembled operating system aimed at developers, sysadmins, and makers in general. If you replace one or more parts of it, then it stops being Fedora Workstation and starts being a ‘build your own operating system’ OS. There is nothing wrong with wanting to build your own operating system, or with finding it interesting; I think a lot of us initially got into Linux because we enjoyed doing exactly that. And the Fedora project provides a lot of great infrastructure for people who want to build their own operating systems, on their own or by teaming up with others, which is why Fedora has so many spins and variants available.
The Fedora Workstation project is something we made using those tools, and it has been tested and developed as an integrated whole, not as a collection of interchangeable components. The Fedora Workstation project might of course replace certain parts with others over time, like how we are migrating from X.org to Wayland. At some point we are going to drop standalone X.org support and only support X applications through XWayland. But that is not the same as each of our users individually doing the same. And while it might be technically possible for a skilled user to still get things moved back onto X for some time after we make the formal deprecation, the fact is that you would no longer be using ‘Fedora Workstation’. You would be using a homebrew OS that contains parts taken from Fedora Workstation.

So why am I making this distinction? To be crystal clear, it is not to hate on you for wanting to assemble your own OS; in fact, we love having anyone with that passion as part of the Fedora community. I would of course love for you to share our vision and join the Fedora Workstation effort, but the same is true for all the other spin and variant communities we have within Fedora too. No, the reason is that we have a very specific goal of creating a stable and well-working experience for our users with Fedora Workstation, and one of the ways we achieve this is by having a tightly integrated operating system that we test and develop as a whole. Because that is the operating system we as the Fedora Workstation project want to make. We believe that doing anything else creates an impossible QA matrix, because if you tell people ‘hey, any part of this OS is replaceable and should still work’, you have essentially created a testing matrix of infinite size. And while as software engineers I am sure many of us find experiments like ‘I wonder if I can get Fedora Workstation running on a BSD kernel’ or ‘I wonder if I can make it work if I replace glibc with Bionic’ fun and interesting, I am equally sure we all also realize that once we do that we are in self-support territory, and that Fedora Workstation, or any other OS you use as your starting point, can’t be blamed if your system stops working well. And replacing such a core thing as the desktop is no different from those other examples.

Having been in the game of trying to provide a high-quality desktop experience, both commercially in the form of RHEL Workstation and through our community efforts around Fedora Workstation, I have seen and experienced first-hand the problems that the mindset of an interchangeable desktop creates. For instance, before we switched to the Fedora Workstation branding, when it was all just ‘Fedora’, I saw reviewers complain about missing features, features we had actually spent serious effort implementing, because the reviewer decided to review a different spin of Fedora than the GNOME one. Other cases I remember are customers trying to fix a problem by switching desktops, only to discover that while the initial issue they wanted fixed got resolved by the switch, they now got a new batch of issues that was equally problematic for them. And we were left trying to figure out whether we should fix the original problem, the new ones, or maybe the problems reported by users of a third desktop option. We also had cases of users who, just like the reviewer mentioned earlier, assumed something was broken or missing because they were using a different desktop than the one where the feature was added. And at the same time, trying to add every feature everywhere would dilute our limited development resources so much that it made us move slowly and left us without the resources to get ready for major changes in the hardware landscape, for instance.
So for RHEL we now only offer GNOME as the desktop, and the same is true in Fedora Workstation. That is not because we don’t understand that people enjoy experimenting with other desktops, but because it allows us to work with our customers, users, and hardware partners on fixing the issues they have with our operating system, because it is a clearly defined entity, on adding the features they need going forward, and on properly supporting the hardware they are using, as opposed to spreading ourselves so thin that we just run around putting band-aids on the problems reported.
And in the longer run I actually believe this approach also benefits those of you who want to build your own OS, or use an OS built by another team around a different set of technologies, because while the improvements might come in a bit later for you, the work we now have the ability to undertake due to having a clear focus, like our work on adding HiDPI support, getting Wayland ready for desktop use, or enabling Thunderbolt support in Linux, makes it a lot easier for these other projects to eventually add support for these things too.

Update: Adam Jackson’s oft-quoted response to the old ‘Linux is about choice’ meme is also required reading for anyone wanting a high-quality operating system.
