Inventing the Future

Discovery Part 2: How

“The Universe is made of stories, not of atoms.” – Muriel Rukeyser

Last time, I discussed why we offer suggested locations to teleport to, and the 5 W’s of the user interaction for suggestions. This time I’ll discuss how we do that, with some level of technical specificity.

Nouns and Stories

Each suggestion is a compact “story” of the form: User => Action => Object. Facebook calls these “User Stories” (perhaps after product-management language), and linguists would call them “SVO” sentences. For example, Howard => snapshot => Playa, which we display as “Howard took a snapshot in Playa”. In this case Playa is a Place in-world. The Story also records the specific position/orientation where the picture was taken, and the picture itself. Each story has a unique page generated from this information, giving the picture, description, a link to the location in-world, and the usual buttons to share it on external social media. Because of the metadata on the page, the story will be displayed in external media with the picture and text, and clicking on the story within an external feed brings the reader to this page.
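As a concrete sketch, a Story can be modeled as a small document plus a renderer for the displayed sentence. The field names below are illustrative assumptions, not High Fidelity’s actual schema:

```python
# Hypothetical Story document; field names are illustrative only.
story = {
    "id": "story-8f3a",             # unique internal identifier
    "user": "user-1",               # subject: who did it
    "action": "snapshot",           # verb
    "object": "place-7",            # object: a Place, User, or Thing
    "position": [12.0, 1.7, -4.5],  # where the picture was taken
    "orientation": [0.0, 90.0, 0.0],
    "image_url": "https://example.invalid/snapshots/8f3a.jpg",
}

def to_sentence(story, names):
    """Render the User => Action => Object triple as display text."""
    return "{} took a {} in {}".format(
        names[story["user"]], story["action"], names[story["object"]])
```

With a lookup table of display names, `to_sentence(story, {"user-1": "Howard", "place-7": "Playa"})` yields “Howard took a snapshot in Playa”.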


The User is, of course, the creator of the story. The user has a page, too, which shows some of their user stories, and any picture and description they have chosen to share publicly. If allowed, there’s a link to go directly to that user, wherever they are.


For our current snapshot and concurrency user stories, the Object is the public Place by which the user entered. More generally, it could be any User, Place, or (e.g., marketplace) Thing. These also get their own pages.


The “feed” is then simply an in-world list of such Stories.


Analogously to putting a computer on the network at a bare IP address or registering a domain name with ICANN, a High Fidelity user may create places at an IP address or even a free temporary place name, or they can register a non-temporary name. Places are shown as suggestions to a user only when they are explicitly named places, with no entry restrictions, matching the user’s protocol version. (When we eventually have individual user feeds, we could consider a place to be shareable to a particular user if that logged-in user can enter, rather than only showing places with no restrictions.)

Snapshots are shown only when explicitly shared by a logged-in user, in a shareable place.



At metaverse scale, there could be trillions of people, places, things, and stories about them. That’s tough to implement, and tough for users to make use of the firehose of info. But right now there aren’t that many, and we don’t want to fracture our initial pioneer community into isolated feeds-of-one. So we are designing for scale, but building iteratively, initially using our existing database services and infrastructure. Let’s look first at the design for scale:

First, all the atoms of this universe – people, places, things, and stories – are each small bags of properties that are always looked up by a unique internal identifier. (The system needs to know that identifier, but users don’t.) We will be able to store them as “JSON documents” in a “big file system” or Distributed Hash Table. This means they can be individually read or written quickly and without “locking” other documents, even when there are trillions of such small “documents” spread over many machines (in a data center or even distributed on participating user machines). We don’t traverse or search through these documents. Instead, every reason we have for looking one up is covered by something else that directly has that document’s identifier.

(There are a few small exceptions to the idea that we don’t have to lock any document other than the one being looked up. For example, if we want to record that one user is “following” another, that has to be done quite carefully to ensure that a chain of people can all decide to follow the next at the same time.)

There are also lists of document identifiers that can be very long. For example, a global feed of all Stories would have to find each Story one or more at a time, in some order. (Think of an “infinitely scrollable” list of Stories.) One efficient way to do that is to have the requesting client grab a more manageable “page” of perhaps 100 identifiers, and then look up the documents for however many of those fit on the current display. As the user scrolls, more are looked up. When the user exhausts that set of identifiers, the next set is fetched. Thus such “long paged lists” can be implemented as a JSON document that contains an ordered array of a number of other document identifiers, plus the identifier for the next “page”. Again, each fetch just requires one more document retrieval, looked up directly by identifier. The global feed object is just a document that points to the identifier of the “page” that is currently first. Individual feeds, pre-filtered interest lists, and other features can be implemented as similar long paged lists.
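A minimal sketch of walking such a long paged list, assuming only a hypothetical `fetch_document(id)` lookup function:

```python
def iter_story_ids(fetch_document, first_page_id):
    """Walk a long paged list: each page is itself a document holding an
    ordered array of story identifiers plus the id of the next page."""
    page_id = first_page_id
    while page_id is not None:
        page = fetch_document(page_id)  # one direct lookup per page
        for story_id in page["ids"]:
            yield story_id
        page_id = page.get("next")      # absent/None on the last page
```

A client would consume this lazily, fetching a new page only when the user scrolls past the identifiers already in hand.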

However, at current scale, we don’t need any of that yet. For the support of other aspects of High Fidelity, we currently have a conventional single-machine Rails Web server, connected to a conventional Postgres relational database. The Users, Places, and Stories are each represented as a data table, indexed by identifier.  The feed is a sorted query of Stories.

We expect to be able to go for quite some time with this setup, using conventional scaling techniques of bigger machines, distributed databases, and so forth. For example, we could go to individual feeds as soon as there are enough users for a global feed to be overwhelming, and enough of your online friends have High Fidelity that a personal feed is interesting. This can be done within the current architecture, and would allow a larger volume of Stories to be simultaneously added, retrieved, scored, and sorted quickly. Note, though, that we would really like all users to be offered suggestions — even when they choose to remain anonymous by not logging in, or don’t yet have enough experience to choose who or what to follow. Thus a global feed will still have to work.


We don’t simply list each Story with the most recent ones first. If there’s a lot of activity someplace, we want to clearly show that up front without a lot of scrolling, or a lot of reading of place or user names. For example, a cluster of snapshots in the feed can often make it quite clear what kind of activity is happening, but we want the ordering mechanism to work across mixes of Stories that haven’t even been conceived of yet.

Our ordering doesn’t have to be perfect – there is no “Right Answer”. Our only duty here is to be interesting. We keep the list current by giving each Story a score, which decays over time. The feed shows Stories with the highest scores first. Because the scores decay over time, the feed will generally have newer items first, unless the story score started off quite high, or something bumped the score higher since creation. For example, if someone shares a Story in Facebook, we could bump up the score of the Story — although we don’t do that yet.

Although we don’t display ordered lists of Users or Places, we do keep scores for them. These scores are used in computing the scores of Stories.  For example, a snapshot has a higher initial score if it is taken in a high scoring Place, or by a high scoring User. This gives stories an effect like Google’s page ranking, in which pages with lots of links to them are listed before more obscure pages.

To keep it simple, each item only gets one score. While you and I might eventually have distinct feeds that list different sets of items, an item that appears in your list and my list still just has one score rather than a score-for-you and different score-for-me. (Again, we want this to work for billions of users on trillions of stories.)

To compute a time-decayed score, we store a score number and the timestamp at which it was last updated. When we read an individual score (e.g., from a Place or User in order to determine the initial score of a snapshot taken in that Place by that User), we update the score and timestamp. This fits our scaling goals because only a small finite number of scores are updated at a time. For example, when the score of a Place changes, we do not go back and update the scores of all the thousands or millions of Stories associated with that Place. The tricky part is in sorting the Stories by score, because sorting is very expensive on big sets of items. Eventually, when we maintain our “long paged lists” as described above, we will re-sort only the top few pages when a new Story is created. (It doesn’t really matter if a Story appears on several pages, and we can have the client filter out the small number of duplicates as a user scrolls to new pages of stories.) For now, though, in our Rails implementation, a new snapshot causes us to update the time-decayed score for each snapshot in order, starting from what was highest scoring. Once a story’s score falls below a certain threshold, we stop updating. Therefore, we’re only ever updating the scores of a few days’ worth of activity.
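The lazy read-update pattern can be sketched as follows. For simplicity this uses a single illustrative half-life; our actual rules split the score into two differently-decaying parts:

```python
import time

HALF_LIFE = 3 * 24 * 3600.0  # seconds; one illustrative half-life

def read_score(record, now=None):
    """Decay-and-update on read: the stored number and timestamp are
    refreshed only when someone actually looks at this record."""
    now = time.time() if now is None else now
    elapsed = now - record["updated_at"]
    record["score"] *= 0.5 ** (elapsed / HALF_LIFE)
    record["updated_at"] = now
    return record["score"]

def bump_score(record, amount, now=None):
    """Apply any pending decay first, then add the bump."""
    read_score(record, now)
    record["score"] += amount
    return record["score"]
```

Because nothing fans out to other records, each read or bump touches exactly one document, which is what keeps the scheme compatible with the scaling design above.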

Here are our actual scoring rules at the time I write this. There’s every chance that the rules will be different by the time you read this, and like most crowd-curation sites on the Web, we don’t particularly plan to update the details publicly. But I do want to present this as a specific example of the kinds of things that affect the ordering.

  • We only show Stories in a score-ordered list (the Feed). However, we do score Users and Places, because their scores are used for snapshots. We do this based on the opt-outable “activity” reporting:
    • Moving in the last 10 seconds bumps the User’s score by 0.02.
    • Entering a place bumps the Place’s score by 0.2.
  • Snapshot Stories have an initial score that is the decayed average of the User and Place scores – but with a minimum of 1.
  • Concurrency Stories get reset whenever anyone enters or leaves, to a value of nUsersRemaining/2 + 1.
  • All scores have a half-life of 3 days on the part of the score up to 2, and 2 hours for the portion over 2. Thus a flurry of activity might spike a user or place score for a few hours, and then settle into the “normal high” of 2.  This “anti-windup” behavior allows things to settle into normal pretty quickly, while still recognizing flash mob activity.
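Under the constants just listed, the two-tier “anti-windup” decay can be written as a pure function of a score and elapsed hours (a sketch, not our exact implementation):

```python
KNEE = 2.0              # the "normal high"
SLOW_HALF_LIFE = 72.0   # hours: 3-day half-life for the part up to 2
FAST_HALF_LIFE = 2.0    # hours: 2-hour half-life for the part over 2

def decay(score, hours):
    """Decay the portion of the score up to 2 slowly and the portion
    above 2 quickly, so activity spikes settle back toward 2 fast."""
    low = min(score, KNEE)
    high = max(score - KNEE, 0.0)
    return (low * 0.5 ** (hours / SLOW_HALF_LIFE)
            + high * 0.5 ** (hours / FAST_HALF_LIFE))
```

Evaluated on the 25-person event example that follows, `decay(7.0, 2.0)` comes out near 4.5 and `decay(7.0, 10.0)` settles back near 2, matching the figures given.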


For example, under these rules, you need to move for about 3 minutes 20 seconds a day to keep your score nominally high (2.0). More activity will help the snapshots you create during the activity, but only for a while, and snapshots taken the next day will only get the nominal effect.
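A quick check of that arithmetic, using the rules above (0.02 per 10 seconds of movement, 3-day half-life on the portion up to 2):

```python
# 3:20 of movement = 200 seconds = 20 ten-second bumps of 0.02 each.
daily_bump = (200 / 10) * 0.02

# One day of decay on a score of 2.0 with a 3-day half-life.
daily_loss = 2.0 - 2.0 * 0.5 ** (1 / 3)

# The bump roughly replaces the decay, holding the score near 2.0.
assert abs(daily_bump - daily_loss) < 0.02
```

The bump of 0.4 per day against a decay loss of about 0.41 is why 3:20 of daily movement is the break-even point.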

As another example of current rules, an event with 25 people will bump a place score by 5:

  • If it started at 2, it will be back down to 4.5 in two hours, 2.5 in six hours, and back to 2 in 10 hours.
  • If it started at 0, it will be at 3.5 in two hours, and then decay roughly as above.


We currently do search filtering on the client, filtering within the 100 highest-scoring results that we receive from the server. Each typed word must appear exactly (except for case) within a word of the description or other metadata (such as the word ‘concurrency’ for a concurrency story). There is no autocorrect or autocomplete, nor pluralization or stemming. So, typing “stacks concurrency” will show only the concurrency story for the place named stacks. “howard.stearns snapshot” will show only snapshots taken by me.
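A sketch of that client-side filter, assuming each story carries a blob of searchable text; the substring-within-a-word interpretation here is my reading of the behavior described, not a verified implementation:

```python
def matches(query, text):
    """True when every typed word appears (case-insensitively) within
    some word of the story's description or other metadata."""
    words = text.lower().split()
    return all(any(term in word for word in words)
               for term in query.lower().split())

def filter_stories(query, stories):
    """Filter the top-scored results already fetched from the server."""
    return [s for s in stories if matches(query, s["text"])]
```

Because the candidate set is capped at 100 results, this stays cheap enough to re-run on every keystroke.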

When the volume of data gets large enough, I expect we’ll add server-side searching, with tags.


We feel that by using the “wisdom of crowds” to score and order suggestions of what to do, we can:

  • Make it easy to find things you are interested in
  • Make it easy to share things you like
  • Allow you to affirm others’ activities and have yours affirmed
  • Connect people quickly
  • Create atomic assets that can be shared on various mediums

In doing so, we hope to create a better experience by bringing users to the great people and content that are already there, and encourage more great content development.

Posted in Inventing the Future, Software

Inventing the Future

Discovery Part 1: The Issue

“What is there to do here?”


High Fidelity is a VR platform.

It’s pretty clear how to market a video game. It’s a little bit harder to connect users to a new VR chat room, conferencing app, or Learning Management System. We’re not making any of these, but rather a platform on which such apps can be made by third-party individuals and companies. Once someone has our browser-like “Interface” software, they can connect to any app or experience on our platform — if they know where to go.

The tech press is full of stories about The Chicken and Egg problem: adoption requires interesting content, but content development follows adoption. Verticals such as gaming make that problem a little more focused, but games still require massive up-front investment in technology, content, and marketing. We are instead betting on user-generated content in the early market, and, as with the Internet in general, we expect user-generated content to be the big story in mainstream adoption as well. This makes it that much more important to hook users up with interesting people, places, and things in-world. The early Web used human-curated directories for news, financial info, games, and so forth, but we’re not quite sure what categories are going to have the most interesting initial experiences. And neither do our users!

We want an easy way for users to find interesting people, places, and things to explore, which doesn’t require High Fidelity Inc. to pick and decide what’s hot. We also want an easy way for creators to let others know about their content, without having to go through us nor a third party to market it.

Crowd Curation

One powerful model that has emerged for recognizing interest in user-generated content is crowd curation: a strong signal is produced by real user activity, and used to drive suggestions.

The signal can be explicit endorsement (likes, pins, tweets, and links) or implicit actions (achievements, or funnel actions such as visiting or building). The signals are weighted in favor of the users you value most: friends, strangers with lots of “karma”, or sites with lots of links to them.

There are various ways in which this information is then fed back to users. Facebook and Twitter provide a feed of interesting activity. Google offers suggestions as you type, and more suggestions after you press return. Amazon tells you at checkout that people who bought X also bought Y. However, the underlying crowd curation concept is roughly the same, and it has accelerated early growth (Twitter, Zynga), and ultimately provided enormous value to large communities and their users (Google, Facebook, eBay).

Of course, these systems don’t crawl High Fidelity virtual worlds, so we need to make our own crowd curation system, or find a way to expose aspects of our virtual world to the Web, or both. But more importantly, what do we want to share?

For Real

So, what should we suggest to our users? Ultimately, we want to suggest anything that will give a great experience: people to meet or catch up with, places to experience, and things from the marketplace to use wherever you are. But in these early days, your friends are not likely to have gear or to be online at any given moment. Places are under construction and without reputation. The marketplace is just forming.

Initially then, we’re starting with just two kinds of suggestions:


  1. Taking an in-world snapshot is something that any user can do from any place, and it puts participants onto the road to being content creators. The picture can be taken with the click of a button and requires no typing, which can be hard to do in an HMD. We automatically add the artist username and the place name as description. It often gives a pretty good idea of what’s happening, and we’ve arranged for clicking on the picture to transport you to the very place it was taken, facing the same way, so that you can experience it, too. Finally, it creates a nice visual artifact that you can take home and share outside of High Fidelity.



  2. Even without necessarily knowing another High Fidelity user, it’s definitely more fun to go to places where people are. Even though we’re just in beta, there are always a few places that have people gathered, but they’re not always the same places. It’s hard for a person to know where to look. So we’re making suggestions out of public places, ordered by the number of people currently in them. No need for anyone to do or make anything on this one, as we pick up concurrency numbers automatically from those domains that share this info. (Anyone who makes a domain can control access to it.)



These suggestions appear when you press the “Go To” button, which also brings up an address bar where you can directly enter a known person or place, or search for suggestions (just like a Web browser’s address bar). I can imagine someday offering information about related content in various situations, or a real time messaging and ticker widget for those who want to keep tabs on the latest happenings, but primarily we just want to allow people to “pull” suggestions when they are specifically looking for something to do.

In short, when a person presses the “Go To” button, they get a scrollable list of suggestions that give a visual sense of what has been happening recently, which offers people the chance to visit.


Suggestions are available to both anonymous and logged-in users: we don’t want to require a login to use High Fidelity. However, we would like to offer personalized feeds in the future based on your (optional) login. We also don’t share anything that you have not explicitly shared, and such sharing links to your (self-selected, non-real-world) username.

Sharing and searching are not restricted to our system. Every suggestion has a public Web page that can be shared on Facebook or Twitter, or (soon) searched on Google and other search engines. Clicking the picture or link on that page in a browser brings you to that same place in-world if you have Interface installed, just as if you had clicked on the suggestion within Interface. We feel this will make it easier for content creators to promote their places, snapshots, and eventually, marketplace items. We hope to create a “virtuous circle”, in which search and sharing brings people in through external networks that are much bigger than ours, introduces them to more content, and makes it easy for them to further make and share.

Does It Matter?

In the two weeks before we introduced an early form of this, a bit more than a third of our beta users were within 10 meters of another user in-world on any given day (excluding planned gatherings). Then we introduced a prototype of the concurrency suggestions (“N people are hanging out in some place name”), and over the next two weeks, nearly half of each day’s users were near another at some point in their day. Since then, we’ve done other things to increase average concurrency, and we’re now near 100%.

I don’t have good historical data on snapshots, and the new data is quite volatile. Our private alpha “random image thread” averaged a healthy five entries a day for more than two years, including entries with no pictures and entries with multiple pictures. Now, on days when something interesting is happening, we get 20 or 30 explicitly shared pictures, with most days generating three to eight.

Next: How we do that

Posted in Inventing the Future, Software

Inventing the Future

Dude – Who brought the ‘script’s to the party?


This week, some of our early adopters got together for a party in virtual reality. One amazing thing is how High Fidelity mixes artist-created animations with sensor-driven head-and-hand motion. This is done automatically, without the user having to switch between modes. Notice the fellow in the skipper’s cap walking. His body, and particularly his legs, are being driven by an artist-created animation, which in turn is being automatically selected and paced to match either his real body’s physical translation, or game-controller-like inputs. Meanwhile, his head and arms are being driven by the relative position and rotation of his HMD and the controller sticks in his hands(*).

So, dancing is allowed.

But the system is also open to realtime customization by the participants. Some of the folks at the party spontaneously created drinks and hats and such during the party and brought them in-world for everyone to use. A speaker in the virtual room was held by a tiny fairy that lip-sync’d to the music. One person brought a script that altered gravity, allowing people to dance on the ceiling.


*: Alas, if the user is physically seated at a desk, they tend to hold their hand controllers out in front of them. You can see that with the purple-haired avatar.

Posted in Inventing the Future, meta-medium

Tales of the Sausage Factory

FCC Tells You About Your Phone Transition — Y’all Might Want To Pay Attention.

I’ve been writing about the “shut down of the phone system” (and the shift to a new one) since 2012. Last July, the FCC adopted a final set of rules to govern how this process will work. Because this is a big deal, and because the telecoms are likely to try to move ahead on this quickly, the FCC is having an educational event on Monday, September 26. You can find the agenda here.


For communities, this may seem a long way off. But I feel I really need to evangelize to people here the difference between a process that is done right and a royal unholy screw up that brings down critical communication services. This is not something ILECs can just do by themselves without working with the community — even where they want to just roll in and get the work done. Doing this right, and without triggering a massive local dust-up and push-back a la Fire Island, is going to take serious coordinated effort and consultation between the phone companies and the local communities.


Yes, astoundingly, this is one of those times when everyone (at least at the beginning), has incentive to come to the table and at least try to work together. No, it’s not going to be all happy dances and unicorns and rainbows. Companies still want to avoid spending money, local residents like their current system that they understand just fine, and local governments are going to be wondering how the heck they pay for replacement equipment and services. But the FCC has put together a reasonable framework to push parties to resolve these issues with enough oversight to keep any player that participates in good faith from getting squashed or stalled indefinitely.


So, all you folks who might want to get in on this — show up. You can either be there in person or watch the livestream. Monday, September 26, between 1-2 p.m. For the agenda, click here.


Stay tuned . . .

Posted in General, PSTN Transition, Series of Tubes, Tales of the Sausage Factory

Tales of the Sausage Factory

Cleveland and the Return Of Broadband Redlining.

I am the last person to deny anyone a good snarky gloat. So while I don’t agree entirely with AT&T’s policy blog post taking a jab at reports of Google Fiber stumbling in deployment, I don’t deny they’re entitled to a good snarky blog post. (Google, I point out, denies any disappointment or plans to slow down.) “Broadband investment is not for the feint hearted,” as AT&T put it.


But the irony faeries love to make sport. The following week National Digital Inclusion Alliance (NDIA) had a blog post of their own. Using the publicly available data from the FCC’s Form 477 Report, NDIA showed that in Cleveland’s poorest neighborhoods (which are also predominantly African American), AT&T does not offer wireline broadband better than 1.5 mbps DSL – about the same speed and quality since they first deployed DSL in the neighborhood. This contrasts with AT&T’s announcement last month that it will now make its gigabit broadband service available in downtown Cleveland and certain other neighborhoods.


Put more clearly, if you live in the right neighborhood in Cleveland, AT&T will offer you broadband access literally 1,000 times faster than what is available in other neighborhoods in Cleveland. Unsurprisingly for anyone familiar with the history of redlining, the neighborhoods with crappy broadband availability are primarily poor and primarily African American. Mind you, I don’t think AT&T is deliberately trying to be racist about this. They are participating in the HUD program to bring broadband to low-income housing, for example.


There are two important, but rather different issues here — one immediate to AT&T, one much more broadly with regard to policy. NDIA created the maps to demonstrate that a significant number of people who qualify for the $5 broadband for those on SNAP support that AT&T committed to provide as a condition of its acquisition of DIRECTV can’t get it because the advertised broadband in their neighborhood is soooo crappy that they fall outside the merger condition (the merger requires AT&T to make it available in areas where they advertise availability of 3 mbps). Based on this article from CNN Money, it looks like AT&T is doing the smart thing and voluntarily offering the discount to those on SNAP who don’t have access to even 3 mbps AT&T DSL.


The more important issue is the return of redlining on a massive scale. Thanks to improvements the FCC has made over the years in the annual mandatory broadband provider reporting form (Form 477), we can now construct maps like this for neighborhoods all over the country, and not just from AT&T. As I argued repeatedly when telcos, cable cos and Silicon Valley joined forces to enact “franchise reform” deregulation in 2005-07 that eliminated pre-existing anti-redlining requirements – profit maximizing firms are gonna act to maximize profit. They are not going to spend money upgrading facilities if they don’t consider it a good investment.


Again, I want to make clear that there is nothing intrinsically bad or good about AT&T. Getting mad at companies for behaving in highly predictable ways based on market incentives is like getting mad at cats for eating birds in your backyard. And while I have no doubt we will see the usual deflections that range from “but Google-“ to “mobile gives these neighborhoods what they need” (although has anyone done any actual, systemic surveys of whether we have sufficient towers and backhaul in these neighborhoods to provide speed and quality comparable to VDSL or cable?) to “just wait for 5G,” the digital inequality continues. I humbly suggest that, after 10 years of waiting and blaming others, perhaps we need a new policy approach.


More below . . .


Posted in Series of Tubes, Tales of the Sausage Factory

Inventing the Future

Feeding Content

Our latest High Fidelity Beta release builds on June’s proof of concept, which suggested three visitable places above the address bar. Now we’re extending that with a snapshot feed. This should assist people in finding new and exciting content, and seeing what’s going on across public domains.

Just The Basics:

I. There is now a snapshot button in the toolbar: It works in HMD, and removes all HUD UI elements from the fixed aspect-ratio picture. If you are logged in to a shareable place, you also get an option to share the snapshot to a public feed. (Try doing View->Mirror and taking a selfie!)


II. The “Go To” address bar now offers a scrollable set of suggestions that can be places or snapshots: The two buttons to the right of the address bar switch between the two sets, and typing filters them. Clicking on a place takes you to that named place, but clicking on a snapshot opens another window with more info. You can then visit the place that snapshot was taken by clicking on the picture, explore the other snapshots taken by that person or in that place, or share the picture to Facebook if you choose. If your friends follow your share to the picture on the Web, they can click on the picture to jump to the same place – if they have Interface installed.


(None of this has anything to do with our old Alpha Forums picture feed, which isn’t public or scalable, nor are there changes to the old control-s behavior.)

Where We’re Headed:

There’s a lot more we can do with this, but we wanted to release what we have now and find out what’s important to you.

  1. We’re also thinking about other activity and media you might like to share and see in the feed, such as joining a group or downloading from marketplace.
  2. How might we use the “wisdom of crowds” to score and order the suggestions, based on real activity that people find useful?
  3. The community is quite small right now, and often your real-world or social media friends do not have HMDs yet. So for now there’s just one shared public feed of snapshots. As we grow, we’ll be looking at scaling our infrastructure, and with it, more personalized sharing options.

As we move forward:

  • We don’t want to require a login to use High Fidelity or to enjoy the suggestions made by the feed. We do require a login to share, and we’d like to offer personalized feeds in the future based on your (optional) login.
  • We don’t want to require connecting your High Fidelity account to any social media, but we do want to allow you to do so.
  • We don’t want to share anything without you telling us that it is ok to do so.
Posted in Inventing the Future, meta-medium

Tales of the Sausage Factory

Can Obama Stop The Stalling On Clinton Appointees? Or: “It’s Raining Progressives, Hallelujah!”

As we end 2016, we have an unusually large number of vacancies in both the executive branch and the judiciary.  As anyone not living under a rock knows, that’s no accident. Getting Obama appointments approved by the Senate was always a hard slog, and became virtually impossible after the Republicans took over the Senate in 2015.  This doesn’t merely impact the waning days of the Obama Administration. If Clinton wins the White House, it means that the Administration will start with a large number of important holes. Even if the Democrats also retake the Senate, it will take months to bring the Executive branch up to functioning, never mind the judiciary. If Clinton wins and Republicans keep the Senate, we are looking at continuing gridlock and dysfunction until at least 2018 and possibly beyond.


In my own little neck of the policy woods, this plays out over the confirmation of Federal Communications Commissioner Jessica Rosenworcel (D). Rosenworcel’s term expired in 2015. Under 47 U.S.C. 154(c), Rosenworcel can serve until the end of this session of Congress. That ends no later than Noon, January 3, 2017, according to the 20th Amendment (whether it ends before that, when Congress adjourns its legislative session but remains in pro forma session is something we’ll debate later). Assuming Rosenworcel does not get a reconfirmation vote (although I remind everyone that Commissioner Jonathan Adelstein was in a similar situation in 2004 and he got confirmed in a lame duck session), that would drop the Commission down to 2-2 until such time as the President (whoever he or she will be) manages to get a replacement nominated and confirmed by the Senate. Given the current Commission, this would make it extremely difficult to get anything done — potentially for months following the election. It would also force Chairman Tom Wheeler to remain on the Commission (whether he wants to or not) for some time.


From the Republican perspective, however, this has advantages. If Clinton wins, it means that the FCC is stuck in neutral for weeks, possibly months. Since Republicans generally do not like Wheeler’s policies, that’s just fine. By contrast, if Trump wins, Republicans will have an immediate majority if Wheeler follows the usual custom and steps down at Noon January 20. So even though Republicans promised to confirm Rosenworcel back in 2014 when the Ds allowed Commissioner Mike O’Reilly (R) to get his reconfirmation vote, they have plenty of reasons to break their promise and hold Rosenworcel up anyway. Not that Senate Republicans have anything against Rosenworcel, mind you. It’s just (dysfunctional) business.


Again, it’s important to remind everyone who obsesses about communications that this is not unique to Rosenworcel. From Merrick Garland (remember him?) on down, we have tons of vacancies just sitting there without even the virtue of a bad excuse beyond “well, we’d rather the government not function if someone on the other side is running it.” While I keep hoping this will change, I don’t expect either political party to have a change of heart around this following the next election.


Fortunately, I have a plan so cunning you can stick a tail on it and call it a weasel. On the plus side, if I can get the President to go along with it, it will not only keep things working between Noon on January 3 and Noon on January 20; it will also give the Republicans incredible incentive to move Clinton’s nominations as quickly as possible. On the downside, it’s not entirely clear this is Constitutional. I think it is, based on the scanty available case law (mostly NLRB v. Noel Canning). But, as with test cases generally, I can’t guarantee it. Still, like the idea of preventing a U.S. default on its debt with a trillion-dollar platinum coin, it can’t hurt to think about it.


For the details of what I call “Operation Midnight At Noon” (throwback to the Midnight Judges), see below . . .



Tales of the Sausage Factory

Ninth Circuit Knee-Caps Federal Trade Commission. Or: “You Know Nothing, Josh Wright.”

Back in October 2014, before the Federal Communications Commission (FCC) reclassified broadband as Title II, both the FCC and the Federal Trade Commission (FTC) brought complaints against AT&T Mobility for failure to disclose the extent to which it throttled “unlimited” customers once they passed a fairly low monthly limit. You can see the FCC Notice of Apparent Liability (NAL) here. You can see the FTC complaint, filed in the district court for Northern California, here (press release here). As some of you may remember, the FCC was still debating whether or not to reclassify broadband as a Title II telecom service. Opponents of FCC reclassification (or, indeed, of any FCC jurisdiction over broadband) pointed to the FTC enforcement action as proof that the FTC could handle consumer protection for broadband and the FCC should avoid exercising jurisdiction over broadband altogether.


In particular, as noted in this Washington Post piece, FTC Commissioner Maureen Ohlhausen (R) and then-FTC Commissioner Joshua Wright (R), both vocal opponents of FCC oversight of broadband generally and of reclassification specifically, tweeted that the FTC complaint showed the FTC could require broadband providers to keep their promises to consumers without FCC net neutrality rules. Wright would subsequently reiterate this position in Congressional testimony, pointing to the FTC’s enforcement complaint, brought under Section 5 of the Federal Trade Commission Act (FTCA) (15 U.S.C. 45) as an “unfair and deceptive” practice, as proof that the FTC could adequately protect consumers from potential harms from broadband providers.


Turns out, according to the Ninth Circuit, not so much. As with so much the anti-FCC crowd asserted during the net neutrality debate, this turns out (pending appeal) to be dead wrong. Why? Contrary to what some people seem to think, most notably the usual suspects at Cable’s Team Rocket (who are quoted here as saying “reclassifying broadband means the FTC can’t police any practices of common carriers, at least in the Ninth Circuit”), the decision had nothing to do with reclassification. That remark is either an utterly wrong reading of the case or an incredibly disingenuous one for implying otherwise. (You can see their full press release, which borders on the Trump-esque for its incoherence, here.)


As I explain below, the Ninth Circuit’s decision did not rest on reclassification of broadband. To the contrary, the court made it explicitly clear that it refused to consider the impact of reclassification because, even assuming mobile broadband was not a Title II service, AT&T Mobility is a “common carrier” by virtue of offering plain, ordinary mobile voice service (aka “commercial mobile radio service,” aka CMRS). The Ninth Circuit agreed with AT&T that because AT&T offers some services as common carrier services, AT&T Mobility is a “common carrier” for purposes of Section 5(a)(2) of the FTCA and thus exempt from FTC enforcement even for its non-common carrier services.


Given that Tech Freedom and the rest of the anti-FCC gang wanted this case to show how the Federal Trade Commission could handle all things broadband, I can forgive — and even pity — Tech Freedom’s desperate effort in their press release to somehow make this the fault of the FCC for reclassifying and conjuring an imaginary “gap” in broadband privacy protection rather than admit Congress gave that job to the FCC. After all, denial is one of the stages of grief, and it must come as quite a shock to Cable’s Team Rocket to once again see that Team PK-chu was right after all (even if it doesn’t make me particularly happy that we were, for reasons I will explain below). But this is policy, not therapy. As of today, instead of two cops on the beat for broadband consumer protection, we have one: the Federal Communications Commission. Fortunately for consumers, the FCC has been taking this job quite seriously with both enforcement actions and rulemakings. So while I consider it unfortunate that the Ninth Circuit has cut out the FTC on non-common carrier related actions by companies offering a mix of common carrier and non-common carrier services, the only people who need to panic are Tech Freedom and the rest of the anti-FCC crowd.


OTOH, longer term, this does create a more general concern for consumer protection in more deregulated industries (such as airlines) covered by the exemptions in Section 5 of the FTCA. Yes, I know most folks reading this blog think the universe revolves around broadband, but this decision impacts airlines, bus services, private mail services like UPS, and any other company offering a common carrier service “subject to the Acts to regulate Commerce.” (15 U.S.C. 45(a)(2))  (Also meat packers and a few other named exceptions). So while I am hopeful the FTC appeals this to the full Ninth Circuit for en banc review (and even the Supreme Court, if necessary) from a general consumer protection perspective, the only direct result of this case for broadband policy is to underscore how important it is for the FCC to do its job despite the industry nay-sayers and their Libertarian cheerleaders.


More below . . .





Farewell To AT&T’s Jim Cicconi.

It may seem odd for me to say, and meaning no offense to his replacement Bob Quinn, but I am sorry to see Jim Cicconi retire from AT&T at the end of this month. For those who don’t play in this pond, Cicconi has been AT&T’s Lobbyist in Chief here in D.C. since 2005. It may therefore seem doubly odd that I am sorry to see him go, particularly since Cicconi was so damned good at his job. But, as I have said many times before, I’m not here because companies are evil, nor do I believe the people working for them necessarily delight in crushing consumers, strangling puppies and tossing destitute widows and orphans on the street in rags in the dead of winter. (At least not in telecom; the copyright folks, on the other hand, were ready to screw over the blind a few years back just for giggles. But I digress . . .)



More below . . .




Update on Muni Broadband Decision. The Fate of Pinetop, N.C.

Last week, I wrote about the 6th Circuit’s decision in the muni broadband case, TN v. FCC. I mentioned in passing that the opinion pretty much keeps the status quo. Then I heard from a reader about Pinetop, N.C.


As reported here and here, Greenlight, the muni provider of Wilson, N.C., took advantage of the FCC’s 2015 Order and began offering gigabit broadband in Pinetop, population 1,400. Pinetop lies in Edgecombe County, next door to Wilson County. Under the 2010 N.C. anti-muni law, Greenlight could serve anyone in Wilson County but could not go outside Wilson County to neighboring Edgecombe County. But Wilson decided to take a shot and honor Pinetop’s request to provide service (Greenlight already provides electric service in Pinetop as a muni electric provider, so it wasn’t much of a leap).


The legal situation on this is now somewhat complicated. The 6th Cir. had not stayed the FCC’s preemption order in 2015, so it was totally legal for Greenlight to offer service. What is unclear is how to read N.C. law now that it is “un-preempted” by the Sixth Circuit’s reversal of the FCC. I admit I have no idea how to even begin to answer this question.


But it’s not an abstract legal question. The availability of broadband in Pinetop matters a great deal to the people of Pinetop.


Stay tuned . . . .
