Ten Lessons I Wish I Had Been Taught (zz)


Gian-Carlo Rota
MIT, April 20, 1996, on the occasion of the Rotafest

Allow me to begin by allaying one of your worries. I will not spend the next half hour thanking you for participating in this conference, or for your taking time away from work to travel to Cambridge.

And to allay another of your probable worries, let me add that you are not about to be subjected to a recollection of past events similar to the ones I've been publishing for some years, with a straight face and an occasional embellishment of reality.

Having discarded these two choices for this talk, I was left without a title. Luckily I remembered an MIT colloquium that took place in the late fifties; it was one of the first I attended at MIT. The speaker was Eugenio Calabi. Sitting in the front row of the audience were Norbert Wiener, asleep as usual until the time came to applaud, and Dirk Struik who had been one of Calabi's teachers when Calabi was an undergraduate at MIT in the forties. The subject of the lecture was beyond my competence. After the first five minutes I was completely lost. At the end of the lecture, an arcane dialogue took place between the speaker and some members of the audience, Ambrose and Singer if I remember correctly. There followed a period of tense silence. Professor Struik broke the ice. He raised his hand and said: "Give us something to take home!" Calabi obliged, and in the next five minutes he explained in beautiful simple terms the gist of his lecture. Everybody filed out with a feeling of satisfaction.

Dirk Struik was right: a speaker should try to give his audience something they can take home. But what? I have been collecting some random bits of advice that I keep repeating to myself, do's and don'ts of which I have been and will always be guilty. Some of you have been exposed to one or more of these tidbits. Collecting these items and presenting them in one speech may be one of the less obnoxious among options of equal presumptuousness. The advice we give others is the advice that we ourselves need. Since it is too late for me to learn these lessons, I will discharge my unfulfilled duty by dishing them out to you. They will be stated in order of increasing controversiality. 

Lecturing
Blackboard Technique
Publish the same result several times.
You are more likely to be remembered by your expository work.
Every mathematician has only a few tricks.
Do not worry about your mistakes.
Use the Feynman method.
Give lavish acknowledgments.
Write informative introductions.
Be prepared for old age.
1 Lecturing

The following four requirements of a good lecture do not seem to be altogether obvious, judging from the mathematics lectures I have been listening to for the past forty-six years.
a. Every lecture should make only one main point. The German philosopher G. W. F. Hegel wrote that any philosopher who uses the word "and" too often cannot be a good philosopher. I think he was right, at least insofar as lecturing goes. Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.

b. Never run overtime. Running overtime is the one unforgivable error a lecturer can make. After fifty minutes (one microcentury, as von Neumann used to say) everybody's attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute overtime can destroy the best of lectures.

c. Relate to your audience. As you enter the lecture hall, try to spot someone in the audience with whose work you have some familiarity. Quickly rearrange your presentation so as to manage to mention some of that person's work. In this way, you will guarantee that at least one person will follow with rapt attention, and you will make a friend to boot.

Everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.

d. Give them something to take home. It is not easy to follow Professor Struik's advice. It is easier to state what features of a lecture the audience will always remember, and the answer is not pretty. I often meet, in airports, in the street and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course, and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.

 
2 Blackboard Technique

Two points.
a. Make sure the blackboard is spotless. It is particularly important to erase those distracting whirls that are left when we run the eraser over the blackboard in a nonuniform fashion.

By starting with a spotless blackboard, you will subtly convey the impression that the lecture they are about to hear is equally spotless.

b. Start writing on the top left-hand corner. What we write on the blackboard should correspond to what we want an attentive listener to take down in his notebook. It is preferable to write slowly and in large handwriting, with no abbreviations. Those members of the audience who are taking notes are doing us a favor, and it is up to us to help them with their copying. When slides are used instead of the blackboard, the speaker should spend some time explaining each slide, preferably by adding sentences that are inessential, repetitive or superfluous, so as to allow any member of the audience time to copy our slide. We all fall prey to the illusion that a listener will find the time to read the copy of the slides we hand them after the lecture. This is wishful thinking.


3 Publish the same result several times

After getting my degree, I worked for a few years in functional analysis. I bought a copy of Frederick Riesz' Collected Papers as soon as the big thick heavy oversize volume was published. However, as I began to leaf through, I could not help but notice that the pages were extra thick, almost like cardboard. Strangely, each of Riesz' publications had been reset in exceptionally large type. I was fond of Riesz' papers, which were invariably beautifully written and gave the reader a feeling of definitiveness.
As I looked through his Collected Papers however, another picture emerged. The editors had gone out of their way to publish every little scrap Riesz had ever published. It was clear that Riesz' publications were few. What is more surprising is that the papers had been published several times. Riesz would publish the first rough version of an idea in some obscure Hungarian journal. A few years later, he would send a series of notes to the French Academy's Comptes Rendus in which the same material was further elaborated. A few more years would pass, and he would publish the definitive paper, either in French or in English. Adam Koranyi, who took courses with Frederick Riesz, told me that Riesz would lecture on the same subject year after year, while meditating on the definitive version to be written. No wonder the final version was perfect.

Riesz' example is worth following. The mathematical community is split into small groups, each one with its own customs, notation and terminology. It may soon be indispensable to present the same result in several versions, each one accessible to a specific group; the price one might have to pay otherwise is to have our work rediscovered by someone who uses a different language and notation, and who will rightly claim it as his own.


4 You are more likely to be remembered by your expository work

Let us look at two examples, beginning with Hilbert. When we think of Hilbert, we think of a few of his great theorems, like his basis theorem. But Hilbert's name is more often remembered for his work in number theory, his Zahlbericht, his book Foundations of Geometry and for his text on integral equations. The term "Hilbert space" was introduced by Stone and von Neumann in recognition of Hilbert's textbook on integral equations, in which the word "spectrum" was first defined at least twenty years before the discovery of quantum mechanics. Hilbert's textbook on integral equations is in large part expository, leaning on the work of Hellinger and several other mathematicians whose names are now forgotten.
Similarly, Hilbert's Foundations of Geometry, the book that made Hilbert's name a household word among mathematicians, contains little original work, and reaps the harvest of the work of several geometers, such as Kohn, Schur (not the Schur you have heard of), Wiener (another Wiener), Pasch, Pieri and several other Italians.

Again, Hilbert's Zahlbericht, a fundamental contribution that revolutionized the field of number theory, was originally a survey that Hilbert was commissioned to write for publication in the Bulletin of the German Mathematical Society.

William Feller is another example. Feller is remembered as the author of the most successful treatise on probability ever written. Few probabilists of our day are able to cite more than a couple of Feller's research papers; most mathematicians are not even aware that Feller had a previous life in convex geometry.

Allow me to digress with a personal reminiscence. I sometimes publish in a branch of philosophy called phenomenology. After publishing my first paper in this subject, I felt deeply hurt when, at a meeting of the Society for Phenomenology and Existential Philosophy, I was rudely told in no uncertain terms that everything I wrote in my paper was well known. This scenario occurred more than once, and I was eventually forced to reconsider my publishing standards in phenomenology.

It so happens that the fundamental treatises of phenomenology are written in thick, heavy philosophical German. Tradition demands that no examples ever be given of what one is talking about. One day I decided, not without serious misgivings, to publish a paper that was essentially an updating of some paragraphs from a book by Edmund Husserl, with a few examples added. While I was waiting for the worst at the next meeting of the Society for Phenomenology and Existential Philosophy, a prominent phenomenologist rushed towards me with a smile on his face. He was full of praise for my paper, and he strongly encouraged me to further develop the novel and original ideas presented in it.


5 Every mathematician has only a few tricks

A long time ago an older and well known number theorist made some disparaging remarks about Paul Erdos' work. You admire contributions to mathematics as much as I do, and I felt annoyed when the older mathematician flatly and definitively stated that all of Erdos' work could be reduced to a few tricks which Erdos repeatedly relied on in his proofs. What the number theorist did not realize is that other mathematicians, even the very best, also rely on a few tricks which they use over and over. Take Hilbert. The second volume of Hilbert's collected papers contains Hilbert's papers in invariant theory. I have made a point of reading some of these papers with care. It is sad to note that some of Hilbert's beautiful results have been completely forgotten. But on reading the proofs of Hilbert's striking and deep theorems in invariant theory, it was surprising to verify that Hilbert's proofs relied on the same few tricks. Even Hilbert had only a few tricks!

6 Do not worry about your mistakes

Once more let me begin with Hilbert. When the Germans were planning to publish Hilbert's collected papers and to present him with a set on the occasion of one of his later birthdays, they realized that they could not publish the papers in their original versions because they were full of errors, some of them quite serious. Thereupon they hired a young unemployed mathematician, Olga Taussky-Todd, to go over Hilbert's papers and correct all mistakes. Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis; you will find it in a volume of the Mathematische Annalen of the early thirties. At last, on Hilbert's birthday, a freshly printed set of Hilbert's collected papers was presented to the Geheimrat. Hilbert leafed through them carefully and did not notice anything.
Now let us shift to the other end of the spectrum, and allow me to relate another personal anecdote. In the summer of 1979, while attending a philosophy meeting in Pittsburgh, I was struck with a case of detached retinas. Thanks to Joni's prompt intervention, I managed to be operated on in the nick of time and my eyesight was saved.

On the morning after the operation, while I was lying on a hospital bed with my eyes bandaged, Joni dropped in to visit. Since I was to remain in that Pittsburgh hospital for at least a week, we decided to write a paper. Joni fished a manuscript out of my suitcase, and I mentioned to her that the text had a few mistakes which she could help me fix.

There followed twenty minutes of silence while she went through the draft. "Why, it is all wrong!" she finally remarked in her youthful voice. She was right. Every statement in the manuscript had something wrong. Nevertheless, after laboring for a while, she managed to correct every mistake, and the paper was eventually published.

There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory.


7 Use the Feynman method

Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lie in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: "How did he do it? He must be a genius!"

8 Give lavish acknowledgments

I have always felt miffed after reading a paper in which I felt I was not being given proper credit, and it is safe to conjecture that the same happens to everyone else. One day, I tried an experiment. After writing a rather long paper, I began to draft a thorough bibliography. On the spur of the moment, I decided to cite a few papers which had nothing whatsoever to do with the content of my paper, to see what might happen.
Somewhat to my surprise, I received letters from two of the authors whose papers I believed were irrelevant to my article. Both letters were written in an emotionally charged tone. Each of the authors warmly congratulated me for being the first to acknowledge their contribution to the field.


9 Write informative introductions

Nowadays, reading a mathematics paper from top to bottom is a rare event. If we wish our paper to be read, we had better provide our prospective readers with strong motivation to do so. A lengthy introduction, summarizing the history of the subject, giving everybody his due, and perhaps enticingly outlining the content of the paper in a discursive manner, will go some of the way towards getting us a couple of readers.
As the editor of the journal Advances in Mathematics, I have often sent submitted papers back to the authors with the recommendation that they lengthen their introduction. On occasion I received by return mail a message from the author, stating that the same paper had been previously rejected by Annals of Mathematics because the introduction was already too long.


10 Be prepared for old age

My late friend Stan Ulam used to remark that his life was sharply divided into two halves. In the first half, he was always the youngest person in the group; in the second half, he was always the oldest. There was no transitional period.
I now realize how right he was. The etiquette of old age does not seem to have been written up, and we have to learn it the hard way. It depends on a basic realization, which takes time to adjust to. You must realize that, after reaching a certain age, you are no longer viewed as a person. You become an institution, and you are treated the way institutions are treated. You are expected to behave like a piece of period furniture, an architectural landmark, or an incunabulum.

It matters little whether you keep publishing or not. If your papers are no good, they will say, "What did you expect? He is a fixture!" and if an occasional paper of yours is found to be interesting, they will say, "What did you expect? He has been working at this all his life!" The only sensible response is to enjoy playing your newly-found role as an institution.

From Machine Learning to Machine Reasoning by Léon Bottou



This paper points to a new direction for machine learning. As we build larger machine learning systems, probabilistic modeling alone is not enough; a machine reasoning component has to kick in. However, there is not yet much well-developed research on this, which makes machine reasoning that takes causality into account an important new direction.

Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong

By Lee Gomes

Photo-Illustration: Randi Klett

The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand. Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

Those may sound like the Luddite ravings of a crackpot who breached security at an IEEE conference. In fact, the opinions belong to IEEE Fellow Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley. Jordan is one of the world’s most respected authorities on machine learning and an astute observer of the field. His CV would require its own massive database, and his standing in the field is such that he was chosen to write the introduction to the 2013 National Research Council report “Frontiers in Massive Data Analysis.” San Francisco writer Lee Gomes interviewed him for IEEE Spectrum on 3 October 2014.

Michael Jordan on…

  1. Why We Should Stop Using Brain Metaphors When We Talk About Computing
  2. Our Foggy Vision About Machine Vision
  3. Why Big Data Could Be a Big Fail
  4. What He’d Do With US $1 Billion
  5. How Not to Talk About the Singularity
  6. What He Cares About More Than Whether P = NP
  7. What the Turing Test Really Means
  1. Why We Should Stop Using Brain Metaphors When We Talk About Computing

    IEEE Spectrum: I infer from your writing that you believe there’s a lot of misinformation out there about deep learning, big data, computer vision, and the like.

    Michael Jordan: Well, on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which go back to the 1980s. They actually go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with the previous wave, one that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.

    Spectrum: As a member of the media, I take exception to what you just said, because it’s very often the case that academics are desperate for people to write stories about them.

    Michael Jordan: Yes, it’s a partnership.

    Spectrum: It’s always been my impression that when people in computer science describe how the brain works, they are making horribly reductionist statements that you would never hear from neuroscientists. You called these “cartoon models” of the brain.

    Michael Jordan: I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

    Spectrum: In addition to criticizing cartoon models of the brain, you actually go further and criticize the whole idea of “neural realism”—the belief that just because a particular hardware or software system shares some putative characteristic of the brain, it’s going to be more intelligent. What do you think of computer scientists who say, for example, “My system is brainlike because it is massively parallel.”

    Michael Jordan: Well, these are metaphors, which can be useful. Flows and pipelines are metaphors that come out of circuits of various kinds. I think in the early 1980s, computer science was dominated by sequential architectures, by the von Neumann paradigm of a stored program that was executed sequentially, and as a consequence, there was a need to try to break out of that. And so people looked for metaphors of the highly parallel brain. And that was a useful thing.

    But as the topic evolved, it was not neural realism that led to most of the progress. The algorithm that has proved the most successful for deep learning is based on a technique called back propagation. You have these layers of processing units, and you get an output from the end of the layers, and you propagate a signal backwards through the layers to change all the parameters. It’s pretty clear the brain doesn’t do something like that. This was definitely a step away from neural realism, but it led to significant progress. But people tend to lump that particular success story together with all the other attempts to build brainlike systems that haven’t been nearly as successful.
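
    (As an aside, not from the interview: the "propagate a signal backwards" step can be written out for a tiny two-layer network. This is only a minimal sketch; the data, layer sizes, and learning rate below are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 examples with 3 features each, and binary targets (made up for illustration).
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Two layers of "processing units", with parameters initialized at random.
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):
    # Forward pass: the signal flows through the layers to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error signal is propagated back through the layers
    # (the chain rule), yielding a gradient for every parameter.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to the hidden layer

    # Change all the parameters by a small step against their gradients.
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)
```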

    Spectrum: Another point you’ve made regarding the failure of neural realism is that there is nothing very neural about neural networks.

    Michael Jordan: There are no spikes in deep-learning systems. There are no dendrites. And they have bidirectional signals that the brain doesn’t have.

    We don’t know how neurons learn. Is it actually just a small change in the synaptic weight that’s responsible for learning? That’s what these artificial neural networks are doing. In the brain, we have precious little idea how learning is actually taking place.

    Spectrum: I read all the time about engineers describing their new chip designs in what seems to me to be an incredible abuse of language. They talk about the “neurons” or the “synapses” on their chips. But that can’t possibly be the case; a neuron is a living, breathing cell of unbelievable complexity. Aren’t engineers appropriating the language of biology to describe structures that have nothing remotely close to the complexity of biological systems?

    Michael Jordan: Well, I want to be a little careful here. I think it’s important to distinguish two areas where the word neural is currently being used.

    One of them is in deep learning. And there, each “neuron” is really a cartoon. It’s a linear-weighted sum that’s passed through a nonlinearity. Anyone in electrical engineering would recognize those kinds of nonlinear systems. Calling that a neuron is clearly, at best, a shorthand. It’s really a cartoon. There is a procedure called logistic regression in statistics that dates from the 1950s, which had nothing to do with neurons but which is exactly the same little piece of architecture.
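
    (Another aside: the "cartoon neuron" he describes and a logistic regression unit are the same little piece of arithmetic, a linear-weighted sum passed through a nonlinearity. A minimal sketch, with made-up numbers:)

```python
import numpy as np

def cartoon_neuron(x, w, b):
    """A deep-learning "neuron": a linear-weighted sum passed through a nonlinearity."""
    z = np.dot(w, x) + b               # linear-weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-z))    # logistic (sigmoid) nonlinearity

# This is exactly the prediction step of 1950s logistic regression,
# just with a different name attached to the same little architecture.
x = np.array([0.2, -1.3, 0.7])    # illustrative inputs
w = np.array([0.5, 0.1, -0.4])    # illustrative weights
print(cartoon_neuron(x, w, b=0.1))
```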

     A second area involves what you were describing and is aiming to get closer to a simulation of an actual brain, or at least to a simplified model of actual neural circuitry, if I understand correctly. But the problem I see is that the research is not coupled with any understanding of what algorithmically this system might do. It’s not coupled with a learning system that takes in data and solves problems, like in vision. It’s really just a piece of architecture with the hope that someday people will discover algorithms that are useful for it. And there’s no clear reason that hope should be borne out. It is based, I believe, on faith, that if you build something like the brain, that it will become clear what it can do.

    Spectrum: If you could, would you declare a ban on using the biology of the brain as a model in computation?

    Michael Jordan: No. You should get inspiration from wherever you can get it. As I alluded to before, back in the 1980s, it was actually helpful to say, “Let’s move out of the sequential, von Neumann paradigm and think more about highly parallel systems.” But in this current era, where it’s clear that the detailed processing the brain is doing is not informing algorithmic process, I think it’s inappropriate to use the brain to make claims about what we’ve achieved. We don’t know how the brain processes visual information.


  2. Our Foggy Vision About Machine Vision

    Spectrum: You’ve used the word hype in talking about vision system research. Lately there seems to be an epidemic of stories about how computers have tackled the vision problem, and that computers have become just as good as people at vision. Do you think that’s even close to being true?

    Michael Jordan: Well, humans are able to deal with cluttered scenes. They are able to deal with huge numbers of categories. They can deal with inferences about the scene: “What if I sit down on that?” “What if I put something on top of something?” These are far beyond the capability of today’s machines. Deep learning is good at certain kinds of image classification. “What object is in this scene?”

    But the computational vision problem is vast. It’s like saying when that apple fell out of the tree, we understood all of physics. Yeah, we understood something more about forces and acceleration. That was important. In vision, we now have a tool that solves a certain class of problems. But to say it solves all problems is foolish.

    Spectrum: How big of a class of problems in vision are we able to solve now, compared with the totality of what humans can do?

    Michael Jordan: With face recognition, it’s been clear for a while now that it can be solved. Beyond faces, you can also talk about other categories of objects: “There’s a cup in the scene.” “There’s a dog in the scene.” But it’s still a hard problem to talk about many kinds of different objects in the same scene and how they relate to each other, or how a person or a robot would interact with that scene. There are many, many hard problems that are far from solved.

    Spectrum: Even in facial recognition, my impression is that it still only works if you’ve got pretty clean images to begin with.

    Michael Jordan: Again, it’s an engineering problem to make it better. As you will see over time, it will get better. But this business about “revolutionary” is overwrought.


  3. Why Big Data Could Be a Big Fail

    Spectrum: If we could turn now to the subject of big data, a theme that runs through your remarks is that there is a certain fool’s gold element to our current obsession with it. For example, you’ve predicted that society is about to experience an epidemic of false positives coming out of big-data projects.

    Michael Jordan: When you have large amounts of data, your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of your inferences are likely to be false. They are likely to be white noise.

    Spectrum: How so?

    Michael Jordan: In a classical database, you have maybe a few thousand people in it. You can think of those as the rows of the database. And the columns would be the features of those people: their age, height, weight, income, et cetera.

    Now, the number of combinations of these columns grows exponentially with the number of columns. So if you have many, many columns—and we do in modern databases—you’ll get up into millions and millions of attributes for each person.

    Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride a bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.

    Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.

    So it’s like having billions of monkeys typing. One of them will write Shakespeare.

    Spectrum: Do you think this aspect of big data is currently underappreciated?

    Michael Jordan: Definitely.

    Spectrum: What are some of the things that people are promising for big data that you don’t think they will be able to deliver?

    Michael Jordan: I think data analysis can deliver inferences at certain levels of quality. But we have to be clear about what levels of quality. We have to have error bars around all our predictions. That is something that’s missing in much of the current machine learning literature.

    Spectrum: What will happen if people working with data don’t heed your advice?

    Michael Jordan: I like to use the analogy of building bridges. If I have no principles, and I build thousands of bridges without any actual science, lots of them will fall down, and great disasters will occur.

    Similarly here, if people use data and inferences they can make with the data without any concern about error bars, about heterogeneity, about noisy data, about the sampling pattern, about all the kinds of things that you have to be serious about if you’re an engineer and a statistician—then you will make lots of predictions, and there’s a good chance that you will occasionally solve some real interesting problems. But you will occasionally have some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for the best.

    And so that’s where we are currently. A lot of people are building things hoping that they work, and sometimes they will. And in some sense, there’s nothing wrong with that; it’s exploratory. But society as a whole can’t tolerate that; we can’t just hope that these things work. Eventually, we have to give real guarantees. Civil engineers eventually learned to build bridges that were guaranteed to stand up. So with big data, it will take decades, I suspect, to get a real engineering approach, so that you can say with some assurance that you are giving out reasonable answers and are quantifying the likelihood of errors.

    Spectrum: Do we currently have the tools to provide those error bars?

    Michael Jordan: We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.

    It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.

    Spectrum: When you read about big data and health care, every third story seems to be about all the amazing clinical insights we’ll get almost automatically, merely by collecting data from everyone, especially in the cloud.

    Michael Jordan: You can’t be completely a skeptic or completely an optimist about this. It is somewhere in the middle. But if you list all the hypotheses that come out of some analysis of data, some fraction of them will be useful. You just won’t know which fraction. So if you just grab a few of them—say, if you eat oat bran you won’t have stomach cancer or something, because the data seem to suggest that—there’s some chance you will get lucky. The data will provide some support.

    But unless you’re actually doing the full-scale engineering statistical analysis to provide some error bars and quantify the errors, it’s gambling. It’s better than just gambling without data. That’s pure roulette. This is kind of partial roulette.

    Spectrum: What adverse consequences might await the big-data field if we remain on the trajectory you’re describing?

    Michael Jordan: The main one will be a “big-data winter.” After a bubble, when people invested and a lot of companies overpromised without providing serious analysis, it will bust. And soon, in a two- to five-year span, people will say, “The whole big-data thing came and went. It died. It was wrong.” I am predicting that. It’s what happens in these cycles when there is too much hype, i.e., assertions not based on an understanding of what the real problems are or on an understanding that solving the problems will take decades, that we will make steady progress but that we haven’t had a major leap in technical progress. And then there will be a period during which it will be very hard to get resources to do data analysis. The field will continue to go forward, because it’s real, and it’s needed. But the backlash will hurt a large number of important projects.


  4. What He’d Do With $1 Billion

    Spectrum: Considering the amount of money that is spent on it, the science behind serving up ads still seems incredibly primitive. I have a hobby of searching for information about silly Kickstarter projects, mostly to see how preposterous they are, and I end up getting served ads from the same companies for many months.

    Michael Jordan: Well, again, it’s a spectrum. It depends on how a system has been engineered and what domain we’re talking about. In certain narrow domains, it can be very good, and in very broad domains, where the semantics are much murkier, it can be very poor. I personally find Amazon’s recommendation system for books and music to be very, very good. That’s because they have large amounts of data, and the domain is rather circumscribed. With domains like shirts or shoes, it’s murkier semantically, and they have less data, and so it’s much poorer.

     There are still many problems, but the people who build these systems are hard at work on them. What we’re getting into at this point is semantics and human preferences. If I buy a refrigerator, that doesn’t show that I am interested in refrigerators in general. I’ve already bought my refrigerator, and I’m probably not likely to still be interested in them. Whereas if I buy a song by Taylor Swift, I’m more likely to buy more songs by her. That has to do with the specific semantics of singers and products and items. To get that right across the wide spectrum of human interests requires a large amount of data and a large amount of engineering.

    Spectrum: You’ve said that if you had an unrestricted $1 billion grant, you would work on natural language processing. What would you do that Google isn’t doing with Google Translate?

    Michael Jordan: I am sure that Google is doing everything I would do. But I don’t think Google Translate, which involves machine translation, is the only language problem. Another example of a good language problem is question answering, like “What’s the second-biggest city in California that is not near a river?” If I typed that sentence into Google currently, I’m not likely to get a useful response.

    Spectrum: So are you saying that for a billion dollars, you could, at least as far as natural language is concerned, solve the problem of generalized knowledge and end up with the big enchilada of AI: machines that think like people?

    Michael Jordan: So you’d want to carve off a smaller problem that is not about everything, but which nonetheless allows you to make progress. That’s what we do in research. I might take a specific domain. In fact, we worked on question-answering in geography. That would allow me to focus on certain kinds of relationships and certain kinds of data, but not everything in the world.

    Spectrum: So to make advances in question answering, will you need to constrain them to a specific domain?

    Michael Jordan: It’s an empirical question about how much progress you could make. It has to do with how much data is available in these domains. How much you could pay people to actually start to write down some of those things they knew about these domains. How many labels you have.

    Spectrum: It seems disappointing that even with a billion dollars, we still might end up with a system that isn’t generalized, but that only works in just one domain.

    Michael Jordan: That’s typically how each of these technologies has evolved. We talked about vision earlier. The earliest vision systems were face-recognition systems. That’s domain bound. But that’s where we started to see some early progress and had a sense that things might work. Similarly with speech, the earliest progress was on single detached words. And then slowly, it started to get to be where you could do whole sentences. It’s always that kind of progression, from something circumscribed to something less and less so.

    Spectrum: Why do we even need better question-answering? Doesn’t Google work well enough as it is?

    Michael Jordan: Google has a very strong natural language group working on exactly this, because they recognize that they are very poor at certain kinds of queries. For example, using the word not. Humans want to use the word not. For example, “Give me a city that is not near a river.” In the current Google search engine, that’s not treated very well.


  5. How Not to Talk About the Singularity

    Spectrum: Turning now to some other topics, if you were talking to someone in Silicon Valley, and they said to you, “You know, Professor Jordan, I’m a really big believer in the singularity,” would your opinion of them go up or down?

    Michael Jordan: I luckily never run into such people.

    Spectrum: Oh, come on.

    Michael Jordan: I really don’t. I live in an intellectual shell of engineers and mathematicians.

    Spectrum: But if you did encounter someone like that, what would you do?

    Michael Jordan: I would take off my academic hat, and I would just act like a human being thinking about what’s going to happen in a few decades, and I would be entertained just like when I read science fiction. It doesn’t inform anything I do academically.

    Spectrum: Okay, but knowing what you do academically, what do you think about it?

    Michael Jordan: My understanding is that it’s not an academic discipline. Rather, it’s partly philosophy about how society changes, how individuals change, and it’s partly literature, like science fiction, thinking through the consequences of a technology change. But as far as I can tell, they don’t produce algorithmic ideas that inform us about how to make technological progress, because I don’t ever see them.


  6. What He Cares About More Than Whether P = NP

    Spectrum: Do you have a guess about whether P = NP? Do you care?

    Michael Jordan: I tend to be not so worried about the difference between polynomial and exponential. I’m more interested in low-degree polynomial—linear time, linear space. P versus NP has to do with the categorization of algorithms as either polynomial, which means they are tractable, or exponential, which means they’re not.

    I think most people would agree that probably P is not equal to NP. As a piece of mathematics, it’s very interesting to know. But it’s not a hard and sharp distinction. There are many exponential time algorithms that, partly because of the growth of modern computers, are still viable in certain circumscribed domains. And moreover, for the largest problems, polynomial is not enough. Polynomial just means that it grows at a certain superlinear rate, like quadratic or cubic. But it really needs to grow linearly. So if you get five more data points, you need five more amounts of processing. Or even sublinearly, like logarithmic. As I get 100 new data points, it grows by two; if I get 1,000, it grows by three.

    That’s the ideal. Those are the kinds of algorithms we have to focus on. And that is very far away from the P versus NP issue. It’s a very important and interesting intellectual question, but it doesn’t inform that much about what we work on.

    Spectrum: Same question about quantum computing.

    Michael Jordan: I am curious about all these things academically. It’s real. It’s interesting. It doesn’t really have an impact on my area of research.


  7. What the Turing Test Really Means

    Spectrum: Will a machine pass the Turing test in your lifetime?

    Michael Jordan: I think you will get a slow accumulation of capabilities, including in domains like speech and vision and natural language. There will probably not ever be a single moment in which we would want to say, “There is now a new intelligent entity in the universe.” I think that systems like Google already provide a certain level of artificial intelligence.

    Spectrum: They are definitely useful, but they would never be confused with being a human being.

    Michael Jordan: No, they wouldn’t be. I don’t think most of us think the Turing test is a very clear demarcation. Rather, we all know intelligence when we see it, and it emerges slowly in all the devices around us. It doesn’t have to be embodied in a single entity. I can just notice that the infrastructure around me got more intelligent. All of us are noticing that all of the time.

    Spectrum: When you say “intelligent,” are you just using it as a synonym for “useful”?

    Michael Jordan: Yes. What our generation finds surprising—that a computer recognizes our needs and wants and desires, in some ways—our children find less surprising, and our children’s children will find even less surprising. It will just be assumed that the environment around us is adaptive; it’s predictive; it’s robust. That will include the ability to interact with your environment in natural language. At some point, you’ll be surprised by being able to have a natural conversation with your environment. Right now we can sort of do that, within very limited domains. We can access our bank accounts, for example. They are very, very primitive. But as time goes on, we will see those things get more subtle, more robust, more broad. At some point, we’ll say, “Wow, that’s very different from when I was a kid.” The Turing test has helped get the field started, but in the end, it will be sort of like Groundhog Day—a media event, but something that’s not really important.


About the Author

Lee Gomes, a former Wall Street Journal reporter, has been covering Silicon Valley for more than two decades.

What is life about?

This afternoon, as I sat at my desk in my office, I suddenly felt that life is beautiful. There are many moments when we feel this way. But why, most of the time, are we tired, frustrated, and sad?

What is life about?

I guess this question has popped up many times in most people's heads during their lifetimes. However, there is no easy or unique answer; different people may have different opinions about it.

For me, what is the answer?

I recognize that I am always pursuing something in life, and much of the time I am doing very well, yet I am not happy. Why?

This has to do with how we think about life and how we handle our daily lives.

My ultimate dream in life is to be an excellent person. This is not to say that I want to be the smartest, like Michael I. Jordan, or the richest, like Bill Gates, or the most powerful, like Xi Jinping. Life contains a lot of randomness; even they could not reproduce their own lives if they had to start over.

To be an excellent person, based on my thinking this afternoon (which may change as time goes by), means:

1. To contribute positively to the world we live in.
    This means providing positive energy to the world: by helping others, by encouraging the people around us and those we meet, and by working hard and offering our intelligence to the world in a reasonable way.
    Whether we work as an employee in a company, a professor at a university, or the leader of an organization or company we have started, we should work hard at the business we are in, help the place we are in, and embrace the shared benefits for all of us. On one hand, this ensures a reasonable material life, so we do not need to worry about money to raise our families; on the other hand, it lets us put our effort into the work on which we will spend most of our time.

2. To be able to appreciate the beauty of life and of this world.
    There are so many beautiful things in our lives: the sunshine this afternoon, the feeling of the wind passing by and kissing our skin, the light breaking into our office and projecting onto the blackboard, the fantastic views outside our office window. We should not ignore so many beautiful things. We should have the ability to appreciate them, to record them, and to feel happy about them.
    That is why we need to travel now and then: to see the world, to experience it, to feel happy about it, and to let a smile spread across our faces because of it. That is why we work hard: because we want to live in or travel to beautiful places like the Bay Area in California. We should take some time to feel grateful for the beautiful views we see and the wonderful people we meet, to bless them and ourselves, and to feel happy about it.

3. To create beauty and happiness for this world.
    To be a creative person: since we have the ability to sense this world, we can create beauty by writing good articles to share with people, by writing poems that put the good feelings of life into words, by learning to take pictures and express ourselves, and by learning to play good music or to write lyrics and music of our own.
    At the same time, we can make the environment around us better, inspire the people around us, and share good things with them, so that people feel happy because of us. For this, we should learn to be positive: have a good attitude toward life, cheer people up, let good friends cheer us up, join them in happiness, and bring good feelings to people. We should be kind and trustworthy, to become a better version of ourselves.

From this perspective, life is a blessing, a source of small happinesses that merge into the happiness of the entire world.

We make this world a better place by making ourselves better. That is why we set goals and strive for them, even through negative feelings and down times, while at the same time enjoying every day: the views we see, the music we hear, talking with people, writing something. Do not care too much about how people judge you; judge instead whether what you do is consistent with your own attitude toward life.

Practicing the Moment of Entering Focus: "The Zone" (zz)

One weekend morning on a business trip in Japan, I was standing at a bus stop, planning a walk by the sea. The street was empty and my phone had no internet access, so I simply zoned out offline. After a while, an elderly Japanese gentleman came around the corner, his suitcase dragging along the sidewalk and cutting through the quiet hanging in the air. He walked up to the bus stop and folded away the suitcase's handle, the aluminum frame clicking crisply. He took a handbag out from under his coat, pulled open the zipper, drew out his ticket, and let out a soft breath.

Suddenly I realized how many sounds had accompanied those few simple movements.

Had I not been traveling, my senses sharper than usual and with nothing to do at that moment, I would never have noticed these details.

 

Two different kinds of focus

Ask ten successful people for the secret of success and you will probably hear "focus" eleven times, because someone will say it twice for emphasis. Focus comes in two kinds. One is focus on a particular goal, the kind discussed in "Fishing for Big Fish or Small Fish" (〈釣大魚還是小魚〉). The other is what I want to share this time: focus in the moment. The first kind requires a good deal of self-understanding and is closer to "rational focus." The second demands complete immersion in one task for a stretch of time; it is hard to force through reason alone, and in one careless instant your left thumb and ring finger press Alt+Tab on their own and you are back on Facebook or LINE. I think this kind of focus is closer to a discipline or practice: "irrational focus."

Irrational focus has become a luxury in modern life.

As technology brings people more and more convenience, it also digs channel after channel into the levee of our attention, diverting focus that should have flowed toward our goals. We turn ourselves into multi-core processors, accustomed to handling several tasks at once, with certain always-on resident programs constantly pulling at our attention. There is a simple test: close your eyes and try to think of nothing. Many people will immediately find this very hard; thoughts of every kind keep rising and falling in the mind, like the colored lights and shadows darting beneath your eyelids after you close your eyes, never resting for a moment.

 

Improving work efficiency

When traveling, I often feel that the time it takes to reach a destination is far longer than the sum of the individual legs of the trip. A few times I worked it out carefully and found that, in adding things up, I had ignored the scraps of time spent transferring and waiting, and those scraps ended up eating a non-trivial amount of time. Work is probably the same: over a whole day, how much time do we spend "transferring" and "waiting"? The time truly focused on the task at hand is probably less than half.

Worse, when traveling we know perfectly well whether we are standing on the platform or sitting in the train; at work, we often do not know whether we are in a state of serious work or drifted off long ago. Facing work with the mindset of running errands is not just twice the effort for half the result; when the two mindsets keep alternating, our focused time gets chopped into fragments.

The damage done by fragmented time goes far beyond what we imagine.

Back in a university class, a professor once said:

The relationship between time invested and results produced is not linear but exponential.

To put it in less mathematical terms: spend ten minutes a day, seven days a week, versus spend one hour and ten minutes in a single uninterrupted morning; the latter will yield far more than the former.

Of course, not everything works this way. For memorization tasks like learning vocabulary, ten minutes a day may well be more effective. But for creative work that demands intense mental effort, or for hard problems that need solving, only a long, continuous stretch of time shows results. It is like a relief pitcher taking the mound: he has to warm up in the bullpen first and throw a few practice pitches before he gradually finds his rhythm. Human willpower also obeys the laws of motion: you must first overcome maximum static friction before anything starts to move. Fragmented time is over before the warm-up is even done, and it is useless for solving hard problems. Conversely, as long as you can stay focused, once the warm-up is complete your efficiency rises dramatically.

The writer Hou Wen-yung (侯文詠) once wrote on Facebook that when running a marathon he often wanted to stop in the last few kilometers. His coach told him that a runner not only must not give up at that point, but should treasure those last few kilometers, because their feeling and their training effect only come after you have already run dozens of kilometers; they are far more precious than the first few. By the same logic, focusing on a piece of work is never achieved just by unplugging the network cable, patting your chubby cheeks, and saying, "Okay, time to work."

It is a stage you can enter only after a period of incubation, settling, and devoted immersion.

 

Seeing more details

Focus not only makes you more efficient; it also lets you notice details you would normally overlook.

When talking with someone, try not to think about what to say next, do not play with your phone, do not eavesdrop on the couple flirting at the next table, and put all your attention on the other person. If you manage it, you will suddenly notice that their facial expressions, their gestures, and the rise and fall of their voice are changing moment by moment, and that it is all connected. We usually miss these details because we are not focused enough. The interesting thing is that the moment you notice them, you miss the details happening right now, because by then you are already distracted, chewing over the details you just picked up. True focus is like the point of a needle: it can concentrate only on a single instant, and even the question of what you are gaining has to be set aside for the moment.

Applied to work: some people seem scattered by nature, yet when they work they see details others miss. It is not that they are putting on an act, or that they have watched too much of The Kindaichi Case Files (《金田一少年之事件簿》) and want to play Kindaichi; it is because they know how to focus in the moment. It is like "The Zone" that athletes talk about: inside it, everything turns to slow motion, the most complicated things become clear, and everything is laid out before your eyes in full detail. All of this comes from focusing in the moment.

 

Training your focus

There are some techniques that may help you reach a focused state. For a task that has to be repeated (proofreading, for instance), the first pass gets the most attention, and focus decays with each pass after that. Unfinished, nagging chores at hand, such as e-mails you should have answered, also get in the way. The environment matters a great deal: some people need absolute silence, while those who do better with a little background noise can use aids such as Coffitivity. There are also personal methods and small habits: some people eat something first, some like to start only after exercising and showering; as for me, I go wash my glasses first, because whenever I start to focus, the first thing I see is the dust on my lenses, and it bothers me.

But the most important thing is still practice.

Training focus, like slimming your belly, has no shortcut; you can only keep at it, day after day, like doing Jung Da-yeon (鄭多燕) workout videos every day. Perhaps we can start with 20 minutes: pick 20 minutes each in the morning, afternoon, and evening, cut off outside contact, switch to airplane mode, and devote yourself to one thing. Once you are used to it, slowly extend the time, adding 10 minutes each week. A month later, you will have three hours a day of the ability that most modern people have lost but that is the most precious of all: the ability to focus.

Personal Charisma (zz)

I do know many people of great personal charisma. They are strong and brave, confident and mature, pursuing life without reservation and with the courage to bounce back from any setback. Of all things in this world, being human is the hardest, yet even nine brushes with death end only as dust. In a sense, our lives are more exciting and interesting because such people exist, and I think that is one reason we ourselves are eager to try to become like them.
3. Personal charisma is a very broad concept. Some people are "ten years of drinking ice, yet the hot blood will not cool"; some are "if it benefits the country, I will give my life for it; how could I shrink from it because of fortune or misfortune"; some are "if one day my soaring ambitions are fulfilled, I dare laugh that even Huang Chao was no true man." These people may not be steady or amusing, may not be meticulous, yet their personal charisma is not one bit the less for it. In short, to live honestly and with zest, to be of benefit to others and answerable for oneself, is probably good enough.
------------------------

There is already an excellent answer on Douban:

1. Composure
(1) Do not casually show your emotions.
(2) Do not tell everyone you meet about your difficulties and misfortunes.
(3) Before asking for others' opinions, think things through yourself first, but do not speak first.
(4) Do not grumble about your dissatisfactions at every opportunity.
(5) For important decisions, consult others whenever possible, and preferably wait a day before announcing them.
(6) Do not show any panic when you speak, nor when you walk.

2. Attentiveness
(1) Often think about the cause and effect of things happening around you.
(2) When execution falls short, dig out the root of the problem.
(3) Have suggestions for improving or streamlining ways of doing things that everyone takes for granted.
(4) Whatever you do, cultivate the habit of being methodical and orderly.
(5) Regularly look for a few flaws or weaknesses that others cannot see.
(6) Be ready at any time to fill in wherever something is lacking.

3. Boldness
(1) Do not habitually use words and phrases that lack confidence.
(2) Do not keep going back on your word or lightly overturn what has already been decided.
(3) When everyone is arguing endlessly, do not be the one without a view of your own.
(4) When the overall mood is low, be optimistic and upbeat.
(5) Do everything with care, because someone is watching you.
(6) When things go badly, take a breath and look for a new opening; even if it has to end, end it cleanly.

4. Magnanimity
(1) Do not deliberately turn potential partners into opponents.
(2) Do not nitpick over others' small faults and mistakes.
(3) Be generous with money; learn the three kinds of giving (giving of wealth, of the teaching, and of fearlessness).
(4) Do not carry the arrogance of power or the prejudice of knowledge.
(5) Share every result and achievement with others.
(6) When someone has to sacrifice or give something up, step forward first.

5. Integrity
(1) Do not promise what you cannot do; once you have said it, work hard to deliver.
(2) Do not keep empty slogans or catchphrases on your lips.
(3) When customers raise questions of bad faith, come up with ways to improve.
(4) Stop all unethical tactics.
(5) Petty cleverness will not do!
(6) Work out the integrity cost of your product or service; that is the cost of your brand.

6. Responsibility
(1) When reviewing any failure, start the self-examination with yourself or your own people.
(2) When a matter is concluded, review the mistakes first, then list the merits.
(3) Admitting mistakes starts from the top; crediting achievements starts from the bottom.
(4) When starting a plan, first define authority and responsibility clearly and allocate them properly.
(5) With people or organizations that shy away from trouble, say things plainly.

How can you improve your personal charisma? (zz)

Thanks to @喻忘忧 for the invitation. Many people have already talked about improving your inner cultivation, so let me say something no one has: how to choose the right occasion and the right way to show your personal charisma.

I once went to a concert; call the headliner Singer A. She invited three guests: Singer B, Actress C, and Singer D.

Singer A sang for over an hour by herself, pausing a few times for talking segments; she had the presence of a queen and was witty and funny. Then she brought out Singer B. The two sat down and began to play, sing, and chat. Judging by fame in mainland China, B may well be better known than A, yet B smiled the whole time, stayed modest and gracious, praised Singer A at every turn, and played guitar to accompany her.

Then Actress C appeared, looking a little rushed. She explained that she had just come from a film set, which was why she was late, followed by a lot of "oh stop it, you" coyness, little bouts of acting cute and bashful. Once she joined the chat, the conversation shifted onto her; you could tell that in their private circle she probably plays the little princess.

Only after B and C had both left the stage did D appear, and she came in from the back of the audience. The venue was a livehouse, with everyone standing packed together. A invited her onstage, but she said she had come to wish A's new album great sales in advance and would not come up; instead she hoped the audience would sing a song with her for A. Then she revealed that it was actually A's birthday that day, and the whole crowd sang "Happy Birthday" together; the atmosphere was warm and moving. When the song ended D said goodbye; I was standing near the front and never even saw her face.

Leaving aside the effect on the show itself, ranking them purely by personal charisma, I would say: B > A > D > C.

C was not only late but also upstaged the host, acted coy, and at one point made the atmosphere awkward. D's choice was much wiser: since she was late anyway, she simply used the chance to rally the whole crowd into a sing-along and lift the mood. A, as the headliner, commanded the room, as she rightly should. B, famous in her own right, was content as a guest to play the supporting role and set off the star, which I think puts her a cut above.

I tell this story to make one point: personal charisma depends on the occasion. The most impressive person in one circle may be a nobody in another. Figure out your own position first, then decide what strategy to take.

In another answer, "How do you shape your own personal charisma?", I gave my own definition of personal charisma:
Personal charisma is, at bottom, the sum of the positive energy a person can output.
Charisma certainly comes from inner strength and substance, but what I want to stress is the word "output." Even if the inner part is not quite there yet, output can be made up for through outward behavior.
Personal charisma (outward output) × ability (inner accumulation) = a person's total energy.
In most cases, the ideal pairing is for the former to be slightly greater than the latter.
In other words, the "temper" you show should not exceed the "ability" you actually have.

So: if you are a nobody, quietly be a nobody. If you are among a group of heavyweights, or people from a field different from yours, with nothing professional to compete on, then the only positive energy you can offer is qualities like humility, easygoingness, warmth, and thoughtfulness, which do not require much in the way of substance. Do not fish for attention, and do not get in the way. Where you are needed, quietly doing your own part well is enough.


Just like her: people who do their work seriously and without showiness earn respect.

Fine; now we know when to keep a low profile. But in most situations, the circle we are in is not that far from our own level; everyone is roughly on a par. In that case, what if we still want to raise our personal charisma and stand out? Of course, charisma ultimately comes from one's own cultivation, but presenting it at the right moment also matters a great deal.

In real life it is hard to observe how a person conducts themselves over a long period and work out how their charisma is displayed; but film and television are different. In a limited running time, the creators use the most economical strokes to convince you that this character has strong personal charisma.

Why do we feel he has charisma? He is quick-witted, funny, calm, and optimistic; in the darkness of a concentration camp he can create a beautiful fairy tale for his child, and when death arrives he can face it with composure.

Why do we feel he has charisma? He is part hero and part villain, arrogant on the surface but loving underneath, and he holds to his principles at the crucial moment.

Why do we feel she has charisma? She dares to defy convention, pursues her goals single-mindedly and by any means, loves and hates boldly, and never gives up. Otherwise Rhett Butler (白瑞德), himself a man of great charisma, would not have fallen for her so recklessly.

They share one common trait: contrast.
Whether it is contrast with the surrounding environment or contrast within the person, this contrast is the key to standing out. A positive, constructive contrast, of course.
Contrast with the environment: when a student organization elects its officers, if a crowd of green, noisy students all give long speeches about their great accomplishments, their lofty ideals, and their goal of devoting their lives to the organization, then the one person who is neither servile nor overbearing, who is gentle and courteous, and who shows great deference and respect to others, will easily be seen as having personal charisma.
Contrast within oneself: you go to interview a scientist whose research is the dry kind, and you assume the person must be terribly dull; then you meet, and it turns out they are cheerful and interesting, full of witty remarks and laughter, someone who loves life as much as they are devoted to scholarship, and you come away feeling they have great personal charisma.

Human personality is made up of many elements. Contrast makes a person seem charismatic because it creates a rich, three-dimensional personality, like a gem whose every facet has a different luster: you cannot take it in at a glance, and so it appears all the more attractive.

So if you want to raise your personal charisma, start from the two angles above: occasion and contrast. When you find yourself in a particular setting, first quickly size up the people around you and find your own position; then selectively output the parts of your character that create contrast.

Of course, all of this rests on your actually having substance. My definition of personal charisma is the output of positive energy; whatever kind of positive energy it is, you first have to have it. As for how to build that substance, I will not go into it here.