Sunday, May 26, 2013

SC284: On Dallas Willard's Homegoing

Yesterday, I attended the Memorial Service for Dallas Willard at The Church On the Way.  I don't know whether the video will be made available online at any point.

***

J. P. Moreland spoke about Dallas' philosophical work.  He talked about his calling to be a light at USC; the enormous notebooks where he stored up notes and thoughts on metaphysics and epistemology; his commitments to the rigorous application of reason and to the defense of metaphysical realism; his great humility when facing intellectual opponents; and his steadfast non-cooperation with being bullied.

There's so much there that inspires but also challenges me.  I don't want to commit myself to metaphysical realism just because that's what Dallas would do.  But at the same time, I wonder about the extent to which I may have acquiesced or abdicated to an anti-realist or constructivist metaphysics without having really tested any of these options.  It's hard to stick your neck out and defend a controversial position--not just about God or religion, but about anything.  Given how much debate is part of what philosophers do, and my own very reserved temperament, I sometimes wonder at my decision to go into this field.  Of course Dallas shows that one doesn't have to be argumentative in order to succeed in philosophy.  But lots of study, rigorous thought, and a commitment to following reason are required.

***

Dallas' granddaughter also spoke.  She shared one of Dallas' last words to her before he passed.  As she was preparing to leave his hospital room at the end of one day, he beckoned her over and told her, "Give 'em heaven."  What a great word.  Of course, Dallas was full of great words.  And he had such a gentle way of offering them to you that you couldn't help but receive them, and before you knew it, they were already at work inside you.

***

One of the things that was emphasized over and over about Dallas, and I've heard this from others as well, was his way of being with people.  He was always attentive to those who talked with him.  "It was like I was the only person in the room."  There was an un-hurriedness to his manner and posture and attitude.

That's one of the main things that I want to work on in my own life.  I want to be open to people.  I want to be able to stop what I'm doing and attend to someone who is in need.  But I find that everything else I'm working on, and just the guarded attitude I have toward my time, keep me from that.  You can sense this in yourself.  When you find yourself walking past someone on the sidewalk, what is your reaction?  Do your eyes flit toward their face or away from their face?  Do you smile at them or assume a blank expression?  Does your pace quicken or slow?  Is your body oriented toward or away from them?  Do you volunteer a greeting or wait for them to acknowledge you?  Is your greeting warm and inviting, or cold and dismissive?  Are you excited about what contact with a stranger might bring to your day or are you fearful of what a stranger might require of or take away from you?  And part of what may be startling, if you just reflect on this for a while, is that all of these contrasts are very, very subtle.  The difference I'm pointing to is not a huge one.  It's not the difference between being polite and being rude; it's the difference between being open and being politely closed.

How would it be if we really could abandon all thoughts of ourselves--of our welfare, of our security, of our reputation--knowing that we really have been richly provided for in God's kingdom?  Learning to lead a life with that sort of texture is no simple thing.  There are no cookie-cutter formulae that will give us the desired results each and every time.  No substitutes or alternatives will suffice.  But while there is no cookie-cutter formula, there is a reliable process, a long and rich knowledge-tradition that many have successfully looked to for guidance in how to live this different kind of life.  And people like Dallas remind us that such a life is, indeed, available to us all.

***

In a short essay entitled "Living in the Vision of God," Dallas wrote: "When you go to Assisi, you will find many people who talk a great deal about St. Francis, many monuments to him, and many businesses thriving by selling memorabilia of him.  But you will not find anyone who carries in himself the fire that Francis carried.  No doubt many fine folks are there, but they do not have the character of Francis, nor do they do the deeds of Francis, nor have his effects."  The ministry of Dallas Willard has been so influential and impactful--in my life as in many others'.  There's so much more that has been said and could be said and should be said.  But, perhaps more important than any words of praise that we might offer concerning him, may it never be said of those of us who have been impacted by the teaching and ministry of Dallas Willard, or who have claimed the name of his teacher and savior, Jesus Christ, that we utterly lacked his character, deeds, or effects.

Tuesday, May 14, 2013

SC283: The Limits of Computers

Aeon Magazine contributor Steven Poole posted a great piece on the problems with entrusting more and more of our lives to computers.


I don't know how far-fetched worries about the powers of computers and algorithms seem to people today.  But I do think those fears are increasingly warranted, as technology progresses.

Poole provides many examples of computers being used or tested to drive cars, filter content on the Internet, rank search-engine results, predict recidivism (i.e., the likelihood that a convicted criminal will reoffend), target retail promotions, disseminate college course and professional training material, and conduct business and stock-trading transactions.  That computers play such key roles in important areas of our private and public lives will probably be old news to most.  But it's still the sort of thing that needs to be watched carefully.  Because computers have demonstrably done so much to improve the quality of our lives, and since they are capable of performing a large number of tasks so much more efficiently than human beings do, it might seem to follow naturally that our lives would be better all around if only computers were running everything.

And even if we haven't completely bought into these utopian visions, we still need to be careful about handing over more limited and restricted tasks to the computers.  The fact that a computer performs a particular task more efficiently than a human agent is not the only factor to take into consideration.  And we should be even more cautious when the 'task' is complex and multi-faceted.  Poole provides a number of interesting examples of computer 'errors' that have already surfaced and that (if not corrected) could have serious and far-reaching negative consequences for the legal and judicial systems, the development of human culture and civilization, the quality of education, and the preservation of liberal democracy and personal freedom.  But he doesn't try to diagnose the source of those problems.

Of course, whenever a problem does crop up in an algorithm--when a program malfunctions and begins spewing out bad data or bad results--software engineers will go in and 'diagnose' the problem.  They will then develop a patch or rewrite the software or add some additional sub-routines that will correct for the identified error and ensure that the program's outputs fall within the expected parameters.  When Poole or I worry about these problems that are already cropping up in the powerful computer systems that exist today, we're not worrying about the same thing that the software engineers are worried about.  They will tackle each individual problem as it arises and create a fix for it.  But what I want to know is whether there is something common to all of these errors.  And as the stakes get higher and higher for each subsequent 'error'--as more and more of our lives become dependent on computers functioning well--I want to know whether it's possible to make any helpful predictions about the kinds of errors that might crop up in the future.

I'd like to suggest that, when thinking about the limitations of computers and algorithms, there are two key ideas to keep in mind.  These ideas are simple enough that just about anyone can understand them, but I think they also capture very deep features of computers.

1) There are significant limitations on the extent to which any computer simulation (of a physical system or decision procedure, for instance) can ever match the corresponding real phenomenon.

The kind of computer that poses these sorts of worries is the kind that makes decisions--or the kind that simulates decision-making.  Whether it's a program that plays chess, or that recommends movies or books to you (based on your buying and browsing history), or that drives a car, what all of these have in common is that they simulate decision-making procedures.  Sometimes it's uncanny how 'perceptive' and 'discerning' these programs can be.  But it's important not to be taken in.  The computer doesn't actually understand what it's doing.  The computer is not making its 'decisions' based on any sort of knowledge or skill or understanding.  Its decision-making is only a simulation.

To grasp the difference I'm trying to highlight between the 'simulation' and the 'real thing,' I like to point to the example of computer-animated films.  Toy Story, Finding Nemo, The Incredibles--how are these films made?  Well, most of the character animation is created by manipulating digital puppets.  This is often done frame-by-frame.  A character's pose and expression are set for each moment in the film.  But then there's all the set-dressing in the background, the environmental factors and atmospherics, the lighting, the characters' hair and clothes--how is all of that put together?  Much of it is generated by sophisticated computer programs that simulate those features of the character's environment.  So, for instance, if the wind is blowing through a character's hair in one scene, or that character is running through another scene, the animator will not 'animate' the hair and clothing frame by frame.  Rather, software engineers develop programs that simulate the behavior of hair and clothing.  The animator will set the positions of the puppet in that scene, and then the program will take care of making the hair and clothing 'look right'.

If you watch the special features on just about any computer-animated film, there will be a section in which the hair and clothing (and other) simulations are discussed.  And it turns out that it's extremely challenging to develop those simulations.  Of course the main challenge (and ultimate goal) for these software designers is to get the hair and clothing to 'look right'.  The film viewer knows what real hair and clothing look like, and how they appear in different environments (when it's windy, when it's humid, when it's soaking wet).  The animated versions of clothing and hair need to act sufficiently like the real thing; otherwise they will be distracting to the viewer.  But while a traditional 2-D animator would just have to draw the hair and clothing a certain way (and that's often hard enough), the software engineers for these 3-D films have to actually create 3-D simulated hair.  Programs for animated hair have to keep track of the behavior of thousands, even millions, of individual hairs.  Each one of those hairs must be able to interact with the other hairs, the character's body, and the larger environment in a way that--when all the pieces are put together--looks realistic to the viewer.  Each hair needs to be able to clump and separate, wave through the air, hang, twirl, etc.  And guess what?  It's really hard to get all those properties into the hair and clothing.  Many computer-generated films come with blooper reels that show some of the amusing images that come from running these not-yet-perfected programs.
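For the curious, the core idea behind such a simulation can be sketched in a few lines.  This is a toy illustration only--real film pipelines are vastly more sophisticated--and every name and parameter here is my own invention: a single strand modeled as a chain of point masses under gravity, with segment lengths enforced by iterative relaxation, a bare-bones version of the 'position-based' approach used in cloth and hair solvers.

```python
import math

def simulate_strand(n=10, rest_len=1.0, steps=300, dt=0.02, damp=0.95):
    """Toy hair strand: a chain of point masses under gravity.
    Damped Verlet integration moves the points; segment lengths are
    then restored by a few passes of iterative relaxation.  The root
    point is pinned in place, like a hair rooted in a scalp."""
    pts = [(float(i), 0.0) for i in range(n)]   # strand starts horizontal
    prev = list(pts)
    gy = -9.8
    for _ in range(steps):
        # damped Verlet integration step
        nxt = []
        for i, (x, y) in enumerate(pts):
            if i == 0:
                nxt.append((x, y))              # pinned root
            else:
                px, py = prev[i]
                nxt.append((x + (x - px) * damp,
                            y + (y - py) * damp + gy * dt * dt))
        prev, pts = pts, nxt
        # restore segment lengths by relaxation
        for _ in range(10):
            for i in range(1, n):
                x0, y0 = pts[i - 1]
                x1, y1 = pts[i]
                dx, dy = x1 - x0, y1 - y0
                d = math.hypot(dx, dy) or 1e-9
                corr = (d - rest_len) / d
                if i == 1:                      # never move the root
                    pts[i] = (x1 - dx * corr, y1 - dy * corr)
                else:
                    pts[i - 1] = (x0 + 0.5 * dx * corr, y0 + 0.5 * dy * corr)
                    pts[i] = (x1 - 0.5 * dx * corr, y1 - 0.5 * dy * corr)
    return pts

strand = simulate_strand()
```

Even in this stripped-down form, notice how many knobs there are to get wrong--time step, damping, relaxation passes--and the only test of success is whether the final motion 'looks right'.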

The results are funny, when we're only dealing with hair and clothing simulations.  And the stakes are very low, when the programs are only trying to get the hair and clothing to behave in a way that looks realistic to a film viewer.  But what if, instead of simulating hair and clothing, some computer is trying to simulate the decision-making process of a correctional psychologist?  And what if what's at stake is whether or not a convicted criminal is granted an early release from prison for good behavior?  Suddenly the stakes go way up.  And if it's tough for programmers to simulate the behavior of physical systems like hair and clothing, shouldn't we expect that it would be even more difficult to simulate the decision-making procedures of people in complex and sensitive situations?

Of course there's data out there that suggests that human decision-making processes are actually badly contaminated by all sorts of emotional and non-rational factors.  Some people think that handing over decision-making to computers would provide for greater objectivity, reliability, and consistency.  But even granting that point, notice what that involves.  In order to program a computer to make decisions for us, a programmer has to assume that she's able to enumerate all of the relevant factors.  That's a bold assumption to make--especially when the stakes are so high.  And the programmer also has to be able to successfully simulate sensitivity to all those factors.

Again, remember: computers don't understand what they're doing.  A CGI hair simulator doesn't understand anything about hair.  And a decision-making simulator doesn't understand anything about the decisions that it's making or the outputs that it's generating.  And most likely (I would suspect in all cases, but I can't say for sure), the actual computational processes that generate that hair simulation or that decision-output bear no resemblance, and have only the most tenuous connection, to the features that make real hair behave as it does or to the processes that human decision-makers go through.

Consider: what is the standard for measuring the success of a CGI hair simulator?  Answer: the output--what the final product looks like visually.  That's the only standard.  And what is the standard for measuring the success of a decision-making simulator?  Answer: the output--whether or not the computer spits out the 'right' results.  Whether such a simulator is 'good' depends only on the outputs and not on the processes involved.  To get a sense of how this might be problematic, imagine a simulator that was designed to predict the positions of the planets in the night sky, as they would appear to someone standing at a particular point on earth.  If the simulator were programmed in such a way as to take the elliptical orbits of the planets into account when generating its predictions, we could expect that it would generate very accurate predictions.  However, it would also be possible to generate equally accurate predictions using a system that was premised on the planets moving in only circular orbits.  Something like this was actually done by scientists working before Kepler.  The system is more complicated; it requires the introduction of epicycles into the model, but it works just as well as the other system at generating the right outputs.  We could imagine programmers learning, one day, that the epicycle-simulator was generating predictions that deviated from the actual facts.  How would they respond?  They might respond just by creating a patch, or writing an additional subroutine, that would correct for the error and get the outputs back on track.
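To make the point concrete, here is a toy sketch of my own (not anything from Poole's article): two models with completely different internal 'pictures'--a genuine Kepler ellipse versus a circle-plus-epicycle construction--that nevertheless produce nearly identical outputs for a mildly eccentric orbit.

```python
import math

def kepler_position(a, e, M):
    """Position on a true elliptical (Kepler) orbit with semi-major
    axis a, eccentricity e, and mean anomaly M.  Solves Kepler's
    equation M = E - e*sin(E) by fixed-point iteration."""
    E = M
    for _ in range(50):
        E = M + e * math.sin(E)
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1 - e * e) * math.sin(E)
    return x, y

def epicycle_position(a, e, M):
    """The same orbit with no ellipse anywhere in the model: an
    off-center circular deferent plus one small epicycle turning at
    twice the rate (the first-order circular approximation of the
    ellipse).  The internal 'picture' is wrong; the outputs are not."""
    x = -1.5 * a * e + a * math.cos(M) + 0.5 * a * e * math.cos(2 * M)
    y = a * math.sin(M) + 0.5 * a * e * math.sin(2 * M)
    return x, y
```

For an eccentricity like 0.1, the two models agree everywhere on the orbit to within a few percent of the orbital radius.  Judged purely by outputs--the only standard available to the simulator's users--the right physics and the wrong physics are almost indistinguishable.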

Now the planets-simulator case might be seen as offering a basis for hope that, as programmers of other simulators (including decision-making simulators) progress, they will actually develop simulations that not only get the 'right' results more consistently but that also simulate more and more accurately the actual mental processes that human decision-makers use.  That's a possibility that I can't rule out in principle.  However, before becoming too hopeful, we should remember that there is a big difference between simulating physical systems and simulating decision-making processes.  And if our primary criterion for testing and evaluating simulations is whether they generate the right output in a large-enough percentage of test cases, then I think we're missing something important.  Does that mean that we shouldn't even try to create these sorts of programs?  Not necessarily, but as the stakes increase and as the complexity of the tasks-to-be-performed increases, we need to keep in mind that any errors will come at an increasingly high cost.  Raising the issue of the complexity of the task-to-be-performed leads us to the second point.

2) Computers work well for accomplishing very particular tasks that are well-defined and for which a specific and limited number of rules can be clearly articulated.  Computers work less well when they are designed to accomplish a wide variety of tasks--especially when the boundaries of these various tasks are not well-defined and how the tasks interrelate cannot be captured by clearly articulated rules.

To help us think about this, I will refer to one of those sci-fi artificial-intelligence apocalypse stories: the 2004 action film i, Robot.  Inspired by Isaac Asimov's short stories, i, Robot explores precisely the sort of thing that I'm worrying about in this blog post.  The envisioned world is one in which personal robots have become thoroughly integrated into society.  They've become so sophisticated and are so completely trusted and accepted that they are given the responsibility of caring for children.  They drive cars and manage cities.  And the idea that a robot could commit a crime or harm a person is almost impossible for the average citizen to fathom.

This great confidence is based, in large part, on the public's trust in the three laws of robotics--hardwired into every robot and completely inviolate.  The first law of robotics states: "A robot may not injure a human being, or through inaction, allow a human being to come to harm."  But a series of apparent and inexplicable 'malfunctions' leads the movie's main character, Del Spooner (played by Will Smith), to realize that there is a basic problem with these laws.  By the end of the film, this string of malfunctions is traced back to a powerful computer, VIKI, who, in order to carry out the first (and most fundamental) law of robotics, attempts to seize control of the entire planet and impose martial law, in order to prevent all possible injury or harm to human beings.

The question this raises is: what went wrong?  The first law of robotics states, "A robot may not injure a human being, or through inaction, allow a human being to come to harm."  If you were creating a line of extremely powerful robots, that seems like just the sort of rule you'd want to hardwire into them, to override any subsequent programming that could possibly lead to human injury.  And yet VIKI manages to take this plausible-sounding principle and use it to justify a global takeover.  The principle seems right, but it has this bad result.  How can we make sense of it?

Interestingly, the film's writers suggest that VIKI's basic problem is that she is 'all head and no heart.'  The world is saved, in part, by one robot whose programmer designed it to feel emotion--an ability that required that it also be able to override the three laws.  In effect, the film says that it's better to be imperfectly rational and have a heart than to be perfectly rational and have no heart.  The lack of confidence in rationality that is expressed in the film is interesting.  The truth is that imperfect rationality and a heart can be just as deadly as pure rationality--maybe even more so.  I think that the film's creators actually misdiagnosed the problem with VIKI.

VIKI's problem is not that she is 'all head and no heart.'  What is wrong with VIKI is that her programming has expanded and is no longer carefully circumscribed by clear rules.  Computers work best for accomplishing very particular tasks that are well-defined and for which a specific and limited number of rules can be clearly articulated.  Now one of the rules--in fact the most important rule--that VIKI has states, "A robot may not injure a human being, or through inaction, allow a human being to come to harm."  What that law is supposed to express is a limitation or constraint on what VIKI can accomplish.  One might expect that a robot, so constrained, would be created with a particular and limited set of tasks in mind: manufacture cars, cook dinner, clean the house, take care of the lawn, filter my e-mail messages, walk the dog, escort the kids to their friends' house.  The problem for VIKI, within the world of the film, is that her boundaries and tasks are not well-defined, and so she interprets the first law of robotics not as a limitation or constraint but as her primary mandate or objective--roughly, do not injure human beings and prevent as many human injuries as possible.  And if accomplishing that task means taking over the world, curtailing freedoms, violating liberty; so be it.

Another way to understand VIKI's problem is in terms of what she understands or doesn't understand about human beings.  Obviously human beings created VIKI in order to serve them--in order to make their lives easier by accomplishing certain tasks more cleanly and efficiently.  But what does VIKI 'understand' about human beings and her relationship to them?  The most basic thing that VIKI knows about human beings is that they are to be preserved and protected.  That is the content of the first law of robotics.  VIKI does not understand that human beings are autonomous and creative beings.  She does not understand that for human beings a life of enslavement may be less desirable than death.  She doesn't understand anything about human values.  And how could she?  She was never designed to understand those things.  She was only designed to perform a few carefully delimited tasks.  And as long as she was limited to those few tasks, this lack of understanding posed no problem.  But once VIKI developed to the point where her influence was not carefully limited and constrained, that lack of understanding manifested itself with alarming results.  So, again, VIKI's problem is not that she lacks emotion, but that she lacks understanding.  When a computer's tasks are very limited and circumscribed, that lack of understanding is not a problem.  But if a computer's tasks are unlimited and open-ended, that lack of understanding, I expect, becomes much more of a liability.

Let's see if I can illustrate my point using an example that doesn't come from science fiction.  Think about a program that recommends books.  It looks at your browsing and purchasing history, identifies patterns, and recommends books that have similar content and subject matter, or that were purchased by people with a similar browsing and purchasing history.  Now if my goal is to read more books like those that I already enjoy, a computer program like this is a great tool for accomplishing that.  But there are other goals that one might have, related to reading, that wouldn't be so well served by such a program: the goal of reading books that challenge my accepted beliefs, or the goal of reading books that have been influential in the history of ideas.  A program that works by analyzing my browsing and purchasing history may not be helpful for that.  Now someone who is reflective would not be in danger of confusion here.  Someone who is reflective will know what the recommending program is for and will only use it with that one particular goal in mind.
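A minimal sketch of the kind of pattern-matching I have in mind might look like this--a toy 'people who bought X also bought Y' counter, with made-up names and data, nothing like the elaborate systems real booksellers run:

```python
from collections import defaultdict

def recommend(histories, user, top_n=3):
    """Toy co-purchase recommender.
    histories: dict mapping each user to the set of books they bought.
    Scores each book the given user hasn't bought by how strongly it
    co-occurs with books the user has bought.  Nothing here represents
    content, quality, or influence -- only co-occurrence counts."""
    mine = histories[user]
    scores = defaultdict(int)
    for other, books in histories.items():
        if other == user or not (mine & books):
            continue                       # skip users with no overlap
        overlap = len(mine & books)        # weight by shared purchases
        for b in books - mine:
            scores[b] += overlap
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [b for b, _ in ranked[:top_n]]

histories = {
    "ann": {"Dune", "Foundation"},
    "bob": {"Dune", "Foundation", "Hyperion"},
    "cam": {"Dune", "Hyperion"},
    "dee": {"Middlemarch"},
}
print(recommend(histories, "ann"))   # prints ['Hyperion']
```

Notice that nothing in the scoring function can ever surface Middlemarch for "ann," however culturally important it might be: by construction, the program recommends more of the same.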

But what about the person who is less reflective?  Could someone make the mistake of thinking that the most culturally influential and important books are those with the most 'hits'?  Could someone unwittingly abdicate control of the content of culture to the machines?  Handing over this kind of culture-forming power to computers might be fine if we could program a computer that was actually sensitive to all of the relevant factors.  But that doesn't look like it's going to happen any time soon.  And the programs that we currently have--many of them are fine, but only for accomplishing limited and carefully defined tasks.  The worry is just that our enthusiasm about the power of computers will cause us to overlook the difference.


***

That's more of a reflection than a thorough-going and rigorous analysis.  But hopefully it's helpful.  Like I said earlier, Poole does a nice job of illustrating the problem, but he doesn't try to diagnose it.  Plenty of people have written on the diagnostic side.  If what I've written intrigues you, you should follow up by exploring that work further.

Thursday, May 9, 2013

SC282: Dallas Willard Passes

Dallas Willard, philosophy professor at USC and renowned Christian author and speaker, passed away yesterday, 08-May, at the age of 77.  It was bittersweet to learn the news, knowing that his passing is a great loss to the Church in this world, but also recognizing that the words of the Apostle Paul are true: "[T]o live is Christ, and to die is gain."

I'll say more about my own thoughts and impressions of Dr. Willard in a later post.  In the meantime, here are some links to tributes, as well as to his own website (where many of his articles on both philosophy and the Christian life can be found), and to the website for the Dallas Willard Center for Spiritual Formation.

Tribute, Christine A. Scheller.
Tribute, John Ortberg.
Tribute, Richard Foster.

Dallas Willard's website.

Dallas Willard Center for Spiritual Formation, Westmont College.


***

Lead, kindly Light, amid th'encircling gloom, lead Thou me on!
The night is dark, and I am far from home; lead Thou me on!
Keep Thou my feet; I do not ask to see
The distant scene; one step enough for me.

...

So long Thy power hath blest me, sure it still will lead me on.
O'er moor and fen, o'er crag and torrent, till the night is gone,
And with the morn those angel faces smile, which I
Have loved long since, and lost awhile!

Meantime, along the narrow rugged path, Thyself hast trod,
Lead, Savior, lead me home in childlike faith, home to my God.
To rest forever after earthly strife
In the calm light of everlasting life.


-- John Henry Newman

Wednesday, May 8, 2013

SC281: Shrinking Government Raises Unemployment???

Here's an article published yesterday in The Atlantic:


It's always jarring to realize how completely different someone's basic outlook on the world can be from one's own.

I definitely lean toward small government.  People on the opposite end of the spectrum tend to highlight the positive effects that large-government policies have had and point out the failures of supposedly small-government policies to produce the benefits that their advocates are supposed to have claimed for them.  (That sentence was a mouthful.)  On the one hand, I don't have a firm grasp of the 'hard' evidence that either side is supposed to be citing in support of its respective position.  And on the other hand, I have a probably-overly-developed sense of skepticism toward all such appeals.  When jumps and dips in numbers are presented as definitively demonstrating that one side's or the other's policies were correct, I start to really wonder about the interpretations of that data.  Economics is such an enormously complicated field, and there are so many factors and variables that play into any particular economic event, that it's hard for me to accept that analysts could know for certain that a particular policy was primarily responsible for a particular change.  Of course my skepticism is probably greatest toward those who hold the opposite view.

Locating myself within the spectrum of views is of limited usefulness.  Confessing my general ignorance is great, if it marks the starting point of a quest for hard data to support my position.  We'll see what I find as time goes on.  The point of this piece is just to highlight the sharp divergence in views and, hopefully, prompt some reflection on this.

Derek Thompson writes, "[I]t's intuitive that expansionary public spending (including on people) following a private sector meltdown are useful to help the economy catch up to trend-line growth."

Really?  That's not intuitive to me at all.  Well... maybe I should nuance that more.  After all, there's quite a bit in that statement and in the larger article that I don't understand.  If our goal is to maintain "trend-line growth," then I suppose expansionary public spending following a private sector meltdown would be useful--at least in the short-term.  But my mind immediately jumps to questions like, Should trend-line growth be the bottom line?  While expansionary public spending might be useful for bolstering trend-line growth in the short-term, is it a wise long-term policy?  Are there long-term negative effects of the government swooping in to the rescue?

Certainly unemployment and its impact on individuals and societies are bad things.  But that doesn't mean that just any approach to "creating jobs" will solve the problem.  Would a society with 0% unemployment, in name only, be better than a society with a "published" or "official" unemployment rate of 11%?  Lots of questions.  Very few concrete answers at this point.  But I'll try to start thinking about this in a more public way.

Tuesday, May 7, 2013

SC280: Søren Kierkegaard - Happy Belated 200th Birthday!

So I found out that 05-May was the 200th birthday of Søren Kierkegaard.  Maybe it's time to revisit some of his writings and open some of his other books for the first time.  Here's a piece from Aeon Magazine to mark the occasion:

"I Still Love Kierkegaard," by Julian Baggini.