The 5 Questions That American Politics Must Resolve

What if I told you that we could solve a lot of problems in American politics if we answered just 5 questions? What if I told you that we could have a much more peaceable, civilized, and rational discourse about our government if we got to the bottom of those questions? Would you be interested in hearing them?

Here’s the catch: these 5 questions are really hard. In fact, I myself do not know the answer to any of them. But if Americans are going to make any progress, we must grapple with these questions because what really divides people goes much deeper than any particular candidate or even any particular political party. So even though I doubt any of us can find a neat answer to these questions, I am presenting them here for you to consider.

1. How Responsible Are Criminals For Their Own Actions?

We know that about half of all prison inmates have a mental health diagnosis. The new chief of mental health for New York City’s prison system has stated that nearly every patient she saw had experienced a broken and abusive childhood. Not everybody in the prison system is mentally ill, but mental illness is clearly a significant component in many crimes.

On the other hand, the vast majority of mentally ill people simply suffer with their condition without hurting anybody else. And a lot of these mentally ill inmates have committed horrible crimes. Their victims deserve some measure of justice, too.

So at what point do we recognize prisoners are often products of their background while at the same time holding them accountable for their actions?

2. How Much of Our Earnings Do We Deserve (and How Much Did We Really Earn)?

We know from various studies that in the United States, the best predictor of your personal success is the background of your parents. If your parents went to college, you probably will too. If they went to a top-10 college, your chances of getting into one dramatically increase.

Of course, most would say that doing well in school has to do with character traits. But how inherent are those traits? Am I a hard worker because that’s the way I am or is it because I was lucky enough to be born to parents who taught me the value of hard work? If my intelligence is at least partially due to my genetics, then is it fair for me to earn more money as a result?

If my wealth is due at least in part to luck and the privileged background I was born into, then is it ok to tax my income and give it to other people who are less fortunate?

3. Is Making More Information Available Always a Good Thing?

When we talk about the tug between hiding information and making it available, we are usually referring to government and corporate secrecy. The question of how much information the government should share with the public is its own thorny problem, but I’m asking a much broader question: does having more information always lead us to make better decisions?

Consider medical studies. You once had to go to a specialized library to read them, but now a lot of them are available online to anybody with an internet connection. That has undoubtedly improved a lot of lives, but it has also allowed people to selectively cite evidence in order to assert that vaccines cause autism.

I’m not saying we should institute a censorship regime or hide information from the public. Those would be terrible ideas. But whereas people once dreamed that the rise of the internet would lead to people making better decisions thanks to more widely available information, we now know that’s not true at all. Is it enough to just release information and trust people to figure it out? Or do we also need to release interpretations to guide how people receive this information and think about it?

4. When Should We Let People Make Their Own Choices?

Your first reaction upon reading that question was probably, “Everybody should always be allowed to make their own choices. Duh.” But stop and think about that for a second. How far are you willing to take that belief?

We don’t allow people to choose an attorney without a legal education to represent them in court. It doesn’t matter how much research the person has done, how carefully they have considered the ramifications of their decision, or how much time they’ve spent weighing costs and benefits. People are allowed to represent themselves in court, but if they are going to have a representative, that person must have a law degree and be in good standing with the bar association. We’ve placed this restriction because an incompetent lawyer can do enormous damage to their client.

Similarly, we require that all prescription drugs undergo clinical trials and be approved by the FDA. We don’t expect that patients will do their own research on a drug, find out its effects, and then make a decision for themselves about what medications to take.

There are certain kinds of complicated investments that members of the general public are not allowed to put money into (hedge funds, for example, are limited to accredited investors). That’s because they are so complicated that most people can’t understand them and therefore cannot properly assess the risk they are taking on.

But if that’s the case, then why do we allow people to choose their own health insurance policy? If you think about it, a health insurance policy is very much like a financial instrument. Choosing the ideal policy for yourself involves making a risk assessment about the future and then evaluating the monetary value of various insurance options to figure out what has the best chance of reducing your financial exposure. Do we really think most Americans are going to be good at making that decision?

Where I live, people can choose which company supplies electricity to them. You would think this is a good idea, but it has led to a lot of scams as power suppliers promise one thing and then hide a lot of costs in fine print. Is it really such a bad idea to just let some appointed panel of experts make that decision for us?

5. Do You Always Know What’s Best For Yourself?

Modern cognitive science tells us that humans are pretty bad at figuring things out. We are prone to react on an emotional level instead of reasoning dispassionately. We make really weird risk assessments, such as being frightened of terrorism or violent crime while not being terribly concerned about automobile accidents. When presented with a change that will make our lives better, we usually gripe about how that change disrupts our routine or makes our lives worse in one way while completely ignoring how it leaves us measurably better off.

What’s even worse: we’re all hard-wired to think that we are completely rational. We don’t like to think of ourselves as relatively foolish animals who stumble around the world guided by instincts and primal emotions.

This is one reason why a lot of decisions in the American government are made by appointed experts and bureaucrats who are largely unknown to the public. When it comes time to decide whether we should adopt one particular broadcast spectrum standard or another, we don’t put that up to a vote. We let some duly appointed experts make that decision for us and mostly go along with it.

But if you take this too far, you end up with paternalism and policies such as banning large sugary drinks. There’s no doubt that a soda ban would make all of our lives better, but we have decided that as a moral principle, we won’t allow the government to make that decision.

So the question is where do we draw the line?

Health Insurance Nuts and Bolts Part 3: How To Pay Your Doctor

So you’re a doctor with your own practice. Congratulations! Time for you to treat some patients and pay down some of that student loan debt.

To make things easier, let’s suppose you are a dermatologist. You perform a very standard set of procedures with a very low rate of complications. In other words, for any procedure you perform on a patient, you have a pretty good idea of how much time it will take and how much medical equipment you are going to use.

After looking at your cost of living, the rent you pay for your office space, and the wages you pay your staff, you decide that you will perform an excision of a sebaceous cyst for $1,000 (note: I’m completely making this number up. I have no idea what the procedure actually costs).

Well, that’s what you would charge somebody without insurance, anyway. In truth, the vast majority of your patients will have insurance, which means that when they need to have a cyst excised, you are going to get your money from the insurance company, not from your patient.

So in comes Xantar Insurance with an offer: whenever you perform a sebaceous cyst excision on one of their members, Xantar will pay you $850. Why would you take this offer? Xantar tells you that they are a pretty big insurance group with thousands of members. On average, 100 of their members need to have a sebaceous cyst excised every year. What Xantar is offering you is a discount in exchange for greater volume. After thinking about it, you might decide this is a fair deal and sign a contract. You are now a covered provider under Xantar Insurance.

The thing is, Xantar probably isn’t the only insurance company in town. Maybe there’s another insurer (let’s call it FFFreak Insurance) whose members require only 70 excisions per year but who will pay you $900 per procedure. You might be happy to accept that, too.

So now you are a dermatologist with three separate price points for the same procedure:

  • Members of Xantar Insurance: $850
  • Members of FFFreak Insurance: $900
  • People without insurance: $1,000
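The payer-dependent pricing above can be sketched as a simple lookup. This is an illustrative toy in Python; the insurer names and dollar amounts are the made-up figures from this post, not real rates.

```python
# Toy model of payer-dependent pricing, using this post's made-up numbers.
# Insurer names (Xantar, FFFreak) and prices are illustrative, not real rates.
PRICES = {
    "Xantar": 850,    # negotiated rate: big membership, big discount
    "FFFreak": 900,   # smaller membership, smaller discount
    None: 1_000,      # uninsured patients pay the full list price
}

def price_for(insurer):
    """What the practice bills for one sebaceous cyst excision."""
    return PRICES.get(insurer, PRICES[None])

print(price_for("Xantar"))  # 850
print(price_for(None))      # 1000
```

The point of the sketch is that there is no single “price” of the procedure; what gets billed depends entirely on who is paying.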

(By the way, remember that insurance companies are facing their own economic calculations, which I described in Part 1.)

We haven’t even gotten into public insurance programs like Medicare, Medicaid, and CHIP, all of which pay doctors at rates different from private insurance. And remember: out of all these groups, only people without insurance ever see what a procedure actually costs. People with insurance pay a co-pay or some amount under a deductible and have the insurance company cover the rest; they never learn what the procedure really costs.

This doesn’t just make the healthcare system very complicated. It also means that healthcare cannot function as a market. One of the first things people learn in Econ 101 is that a market only works when there’s price transparency and information symmetry (i.e., the price of everything must be clear, and everybody in the market must have the same information). Neither of these holds when it comes to paying doctors.

You can call up a retailer and ask what the price of a TV is, but you can’t call up a hospital and ask what the cost of an angiogram is. And that’s why in the American healthcare system, “Let the free market sort it out” will never work. Health care is not a functioning market in the first place.

Health Insurance Nuts and Bolts Part 2: Medical Loss Ratio

I was asked about administrative costs and overhead in my previous post about insurance, so I wanted to expand on it in a new post.

What my reader is asking about is called the Medical Loss Ratio (MLR). That’s the proportion of the insurance company’s revenue that is spent on actual medical care.

Let’s suppose that Xantar National Insurance collects $1.2 million and has to pay out $1 million to doctors, hospitals, therapists, and other care providers. This leaves $200,000 for Xantar to use towards worker salaries, marketing, overhead, and cocaine for the CEO. The medical loss ratio is the medical payouts divided by total revenue, or in this case $1 million / $1.2 million ≈ 83%.
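Using the post’s made-up Xantar figures, the arithmetic can be written out in a few lines of Python:

```python
def medical_loss_ratio(premium_revenue, medical_payouts):
    """Fraction of premium revenue that went to actual medical care."""
    return medical_payouts / premium_revenue

# Xantar's made-up numbers: $1.2M collected, $1M paid out to providers.
mlr = medical_loss_ratio(1_200_000, 1_000_000)
print(f"{mlr:.1%}")  # 83.3%
```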

That’s actually pretty good by many standards.

Before Obamacare, there was no regulation of the MLR. It wasn’t unheard of for an insurance company to have an MLR of 70% or even lower. At 70%, nearly 1 out of every 3 dollars in insurance premiums was going towards something other than actual health care.

Obamacare put in a regulation that said insurance companies must maintain an MLR of at least 80% (some really big insurance companies had to hit 85%). This regulation applied to ALL insurance, not just insurance sold on Healthcare.gov. If an insurance company spent too little of its premiums on medical care, it had to refund the difference to its members. Last year, insurance companies paid out about $2.4 billion in refunds to 8 million Americans. The healthcare industry is worth several trillion dollars, so that’s not much in the grand scheme. But it’s something.
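A simplified sketch of the rebate rule might look like the following. The real ACA calculation is more involved (quality-improvement spending counts toward the ratio, and the figure is averaged over multiple years), so treat this as the idea only, not the regulation:

```python
def required_rebate(premiums, medical_payouts, min_mlr=0.80):
    """Rebate owed to members when the MLR falls below the floor.

    Highly simplified: the actual ACA calculation also credits
    quality-improvement spending and averages over multiple years.
    """
    mlr = medical_payouts / premiums
    if mlr >= min_mlr:
        return 0.0
    return (min_mlr - mlr) * premiums

# Xantar's made-up numbers clear the 80% floor, so no rebate is owed.
print(required_rebate(1_200_000, 1_000_000))  # 0.0
```

An insurer that collected $1 million in premiums but paid out only $700,000 (a 70% MLR) would owe roughly $100,000 back to its members under this simplified rule.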

By the way, government programs like Medicare regularly achieve an MLR of 95% because they aren’t trying to make a profit, government workers aren’t paid as much as private sector workers, and the government doesn’t spend a lot on marketing. This is one reason why ideas like the Public Option or Medicare Buy-in scare conservatives and insurance executives so much. They cannot possibly compete.

Legal stuff: my healthcare posts are for public consumption and available to all. You may copy or share this post to wherever you like as long as you give me attribution.

Health Insurance Nuts and Bolts Part 1: The Basic Business

Want to understand how insurance companies do what they do? The math is complicated, but the core business is actually pretty simple. The insurance company’s revenue is the premiums that it collects from members. Its cost outlays are the payments it makes to hospitals, doctors, and service providers. When it collects more in premiums than it pays out, the insurance company makes money. And insurance companies want to make money.

Let’s illustrate this with an extremely simplified example. Suppose we form an insurance company called Xantar National Health. It has 1,000 members who each pay $100 per month ($1,200 per year). So Xantar collects $1.2 million every year.

Suppose heart bypass surgery costs $100,000. This means if more than 12 of Xantar’s members require heart bypass surgery, then Xantar will lose money (and that’s before we get into the salaries and overhead that Xantar pays employees). Xantar employs the best statisticians and public health researchers who will find out whether this is a reasonable risk.
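The break-even arithmetic above is simple enough to check directly, using the same made-up figures:

```python
# Break-even check with this post's made-up figures.
members = 1_000
monthly_premium = 100    # dollars per member per month
surgery_cost = 100_000   # hypothetical cost of one heart bypass

annual_revenue = members * monthly_premium * 12   # $1,200,000
break_even = annual_revenue // surgery_cost

print(break_even)  # 12: a 13th surgery puts Xantar in the red
```

And that is before salaries and overhead, which is why the statisticians matter so much.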

But Xantar can also use a bunch of tricks to boost profitability. Older people usually need more medical care than younger people, so Xantar will charge higher premiums to older people. Women also use more medical care than men (pregnancy is expensive), so Xantar will charge higher premiums to them, too. And people with pre-existing conditions also usually use a lot of medical services, so Xantar will just flat-out refuse to offer them insurance. Even better, Xantar can write a clause into the insurance contract which says that there is a maximum amount of dollars it will spend on your medical care and then after that you’re on your own.

You may recognize these as practices which used to exist before Obamacare and are now either banned or heavily regulated. Basically, Xantar wants to have a membership that is made up of mostly young men who almost never go to the doctor, and it will do everything it legally can to get that population. Obamacare made that much harder, but Xantar isn’t going to stop trying to find ways to squeeze out more money.

For example, maybe there’s a doctor who will do heart bypass surgery for $90,000 instead of $100,000. So Xantar now declares that all members must use THIS doctor instead of any other doctor. This is why you now see that most insurance policies on the Obamacare exchange have a very narrow list of doctors that they cover.

Or Xantar could deliberately keep premiums very low in order to try to attract as many young people as possible. Then after a few years, Xantar jacks up the premiums. Sure, members will grumble, but most of them will stay because they won’t want to go to the trouble of switching to another insurance company. This is what actually happened when premiums jumped up in 2016: before then, insurance companies were keeping profits low or even taking a loss in order to build up their membership. 

The bottom line is insurance companies are trying to boost profits because that’s what they do. And that’s why a lot of people think the government should get into the healthcare business instead of private companies. Whatever else you may say about the government, it isn’t trying to make a profit. 

In a few days, I’ll write up a post to explain all the kinds of government healthcare systems there are around the world and how I think they would work in the United States.

Copyright: My posts about healthcare are free to the public. You may share or copy this wherever you like as long as you give me attribution.

Supergirl: Hero or The Devil Wears Lycra?

Our first look at Supergirl is here, and the reaction seems to have been decidedly mixed. On the one hand you have people who think it looks like fun, the special effects are pretty good, and the lead seems very appealing. On the other hand, a lot of people are disappointed or even angry that the show seems to be a light rom-com. Take a look at the trailer below and see for yourself. Then follow me below the fold for my thoughts.


Does Jewelry Get Outdated?

The Apple Watch has been fully unveiled with a launch date and price point. Lots of pixels are being spilled about whether it’s going to be successful, whether we should buy it, and whether wearables really are the future of technology. As I said last time I talked about the Apple Watch, it’s a device that I can see the utility of and that I certainly want. Just not now.

One of the big pieces of news coming out of the show yesterday was the price point, ranging from $349 for the basic model to $10,000 for the limited edition 18k gold model. Understandably, that bigger number has gotten a lot of attention. The gold Apple Watch Edition is clearly a product for the very wealthy, who wouldn’t think much of dropping a year’s worth of car payments on a status icon.

The thing is, the Apple Watch Edition is essentially a piece of jewelry. And people usually buy jewelry as an investment or as a family heirloom that’s going to last decades or even generations. So how does that square with a piece of consumer electronics that will eventually grow obsolete?

It’s not that the Apple Watch will necessarily have a yearly lifecycle like other iOS devices do. By this time next year, the Apple Watch will still keep time, count how many steps you take, and talk to your phone. It won’t have whatever new features the Apple Watch 2 has, but it will still be viable.

But what about five years from now? Will an Apple Watch you buy today still be able to sync up and work properly with an iPhone 9? Is there going to be some way for Apple to swap out the innards and update the software? Or is the Apple Watch going to end up much like the iPhone 3G: ultimately disposable? I don’t know that the public is going to be convinced that a watch is something they should replace every two years. It makes more sense to me that Apple will instead introduce new Watches to appeal to different segments (maybe a kids’ version or a version specifically designed for medical professionals).

These are all issues that I’m sure Apple has considered. It will be interesting to see what their solution is.

GamerGate Died Just Like Occupy Wall Street

It began as a seemingly spontaneous uprising, startling the establishment and gathering a popularity that nobody expected. Its participants were fueled by a righteous sense of justice and communicated with each other using social media in ways that observers hadn’t anticipated. For a while, they were all the talk of the town, and there were pundits in the media who openly speculated that this movement would lead to a permanent change. And then, slowly but surely, the momentum died down, the press started paying less attention, and the participants dispersed except for a dedicated core group. A few years later, hardly anybody talks about it, and no actual change has come out of all the noise and fury.

This was the story of Occupy Wall Street, and a few years from now, it will be the story of GamerGate. Not too long ago, the front pages of many traditional newspapers as well as blogs and social media were posting new stories about GamerGate on a daily basis. Now it’s already pretty clearly dead, having accomplished nothing that anybody can discern. The unofficial motto of the movement, “It’s about ethics in videogame journalism,” is now more likely to be used as a sarcastic punchline than a sincere wish. To understand what happened, I think it will be useful to look at how Occupy Wall Street started with a similar bang and then slowly collapsed in upon itself.


How Watches Killed Google Glass (and What It Says About Google’s Process)

Remember Google Glass? Remember the hype, the backlash, and all the jokes about how nobody looks good while wearing them? Whatever happened to those things anyway?

According to this Reuters article, Google Glass has basically faded away, with major developers dropping support and no launch date in sight. It may continue to be available in some form, but it’s looking more and more likely that a mass consumer launch is never going to happen.

Many pundits have noted that the problem with Google Glass was the way it was launched. It was more like an experiment which was given to a very limited audience with plans to expand the launch later. Without a clear launch date, however, no positive publicity came out and Google Glass essentially withered on the vine.

The thing is, this is Google’s normal way of doing business, and it’s not necessarily a bad way to go. I remember when Gmail was invitation-only, and it grew into a smashing success even before it was attached to other Google services like calendar and productivity apps. But for every Gmail there has been a Google Plus or Google Wave: products which started out in limited release and never managed to get off the ground (yes, I’m aware that Google Plus still exists, and this blog even publishes a link to my Google Plus page. But does anybody really regard it as a major player anymore?).

Contrast the Google way with how most other companies launch their products. Most companies announce a new product, give a firm launch date, and then start marketing it with commercials and social media. Apple is the prime example of this model, but other companies, including Samsung and Nintendo, do the same thing. You have to hope that the product actually works and does everything promised, and you don’t have the benefit of testing under real-world conditions the way Google does. But it also means that you can control the information that gets out and spin things positively for yourself before the launch, rather than have all the flaws sitting out for the world to see.

The other thing that killed Google Glass is that smart watches solved some of the problems it was designed for. One of the draws of Google Glass was supposed to be the ability to look up information on the internet without having to pull out your phone all the time. Smart watches offer the same functionality while not looking irreparably goofy. And they are available now; the Apple Watch will join the competition early next year. The one reason to get Google Glass would be if it delivered on the promise of overlaying information on the world (such as translating signs written in foreign languages). But that capability seems very far away. And so Google Glass has faded away to die ignobly.

It makes me wonder if Google might not be better served with a little more opacity. Did Google Glass really have to be announced to the world and tested out in the open? From where I sit, there’s no reason why Google Glass couldn’t have been tested and developed internally much the same way as the next iPhone or Galaxy Note is. Apple has surely had a lot of failed ideas and products which didn’t pan out. The difference is we don’t know about them.

Software Development is Becoming Like Journalism

I came across this article in the New York Times talking about the generation divide in Silicon Valley. It’s very well-written, and I recommend reading the whole thing. There was one part that was a bit tangential to the author’s main point, but it’s what really got me thinking:

There’s a glass-half-full way of looking at this, of course: Tech hasn’t been pedestrianized — it’s been democratized. The doors to start-up-dom have been thrown wide open. At Harvard, enrollment in the introductory computer-science course, CS50, has soared. Last semester, 39 percent of the students in the class were women, and 73 percent had never coded before. These statistics are trumpeted as a sign of computer science’s broadening appeal and, indeed, in the last couple of years the class has become something of a cult and a rite of passage that culminates in the CS50 fair, where students demo their final projects and wear T-shirts reading “I Took CS50.”

I myself have taken an introductory computer science course (at my school, it was simply CS-001). I am not a programmer or anything close to a computer engineer. But here I am running my own website which contains formatting and networking features you could only have dreamed of in the late 90s (I remember what it was like to code HTML manually). This stuff is no longer hard.

Writing software is no longer as obscure as it used to be. Don’t get me wrong: it still takes focused study and education. But contrary to what you might think, most computer programming these days consists of taking components or lines of code other people have written, tweaking them to fit your purpose, and then putting them together. And we’ve now gotten to the point where those off-the-shelf components are incredibly versatile, powerful, and easy to use. There are still companies out there building brand new things from scratch. But most development houses are building an app for a restaurant that wants smart menus, or a system for hospitals to track their patients. Most developers now spend their time thinking about exactly what problem they want to solve or what feature they want to implement, and not so much about exactly what piece of code to use to do it.

It’s similar in many ways to how journalism is more concerned with the ideas and stories you tell than with whether you should have used the subjunctive mood in the third paragraph.

App development is not quite as easy as writing prose, but it’s getting closer. And while that’s a good thing overall, it’s worth thinking about what that will mean in the next few years.

Many of the startups that get the most buzz these days rely on building a network—people use them because other people use them. The most obvious examples are the ones that have a social component such as Pinterest, Instagram, and Spotify. But there are others such as Venmo, a system for paying your friends when you don’t have cash on hand. It only works if your friends also have Venmo.


What this means is that many of the startups coming out of Silicon Valley are doing social engineering more than they are solving a technical problem. The technology behind Venmo is nothing revolutionary. The real difficulty is in convincing enough people to use it, much the same way modern journalism has to convince enough people to read it.

And much like journalism, a lot of services coming out of Silicon Valley give away their product for free which means they rely on advertising for their revenue. I don’t think this is sustainable for journalism in the long run, and I don’t think it is for software development either.

So what to do? The main difference for me is that people care about the quality of the software they use much more than they care about the quality of the journalism they read (sad but true). This should mean that they are willing to pay for quality. And maybe the next bold step for a Silicon Valley startup is to dare to charge for their service.