
Important New Research on Client Profiling

Client profiling is an area that is ignored by most financial planners. New research by Professor Shachar Kariv of the University of California, Berkeley, demonstrates why this is a mistake.

A Three-Dimensional Challenge

As a planner, your job is to understand and balance the three dimensions that are common to every client: their goals, constraints and preferences. To do this at a high level you need to explore each area separately and document your findings.

A skilled planner can guide clients through an exploration of their hopes and dreams for the future. But clients often do not have sufficient resources to meet all their goals. Understanding the relative priority of goals allows you to provide better advice when trade-offs are necessary.

Step two is understanding the constraints that affect your clients’ ability to reach their goals. Time and current financial assets are the most common. Most planners are pretty good at collecting the facts about a client’s time horizon and financial constraints.

The area where many planners struggle is understanding a client’s preferences. “Preferences,” in this context, is an umbrella term that includes emotional and cognitive characteristics that affect how the client perceives the world and makes decisions. Capturing your clients’ preferences requires creation of a “behavioral portrait.”

How to Develop a Behavioral Portrait for Each Client

The industry has traditionally used risk tolerance questionnaires to fill this need. But Kariv’s research has shown that these questionnaires have a number of inadequacies.

First, stated preferences regarding risk tolerance are not particularly reliable. Clients are bad at understanding and reporting their own risk tolerance.

Second, risk tolerance changes over time. It is a dynamic, not a static characteristic.

Another problem with traditional risk tolerance questionnaires is that they focus on only one aspect of a client’s risk persona. There are actually four distinct aspects of every client’s risk persona.

The first is risk requirement. That is the amount of risk a client needs to take to reach their stated goals. It is essentially a math problem (a brief illustration follows the fourth aspect below).

The second is risk capacity. That is the amount of risk a client can afford to take given their current constraints of time and financial resources.

The third is risk tolerance. Kariv’s research suggests that risk tolerance actually has multiple components. They include risk aversion, loss aversion and ambiguity aversion.

Risk aversion relates to a client’s willingness to assume risk.

Loss aversion relates to how a client weighs the potential for gains and losses and the way they react to losses once they occur.

Ambiguity aversion relates to how comfortable a person is making decisions in an environment of uncertainty.

To appropriately tailor the advice we give clients, we should understand the multiple components of their risk tolerance. They are all measurable.

The fourth aspect of a client’s risk persona is risk perception. This refers to a person’s attitudes about the likelihood of bad events occurring. Knowing where your clients stand on the optimist/pessimist scale can help you better counsel them before and after bad events.
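
To make the “math problem” behind the first aspect, risk requirement, concrete, here is a minimal sketch. The goal amount, horizon, current assets and annual contribution are hypothetical, and the calculation simply solves for the constant annual return that would connect today’s resources to the stated goal; it is an illustration, not drawn from Kariv’s research.

```python
# Minimal sketch of the "math problem" behind risk requirement.
# All dollar amounts, the horizon and the contribution below are hypothetical.

def future_value(rate, years, current_assets, annual_contribution):
    """Projected portfolio value assuming a constant annual return and
    end-of-year contributions."""
    value = current_assets
    for _ in range(years):
        value = value * (1 + rate) + annual_contribution
    return value

def required_return(goal, years, current_assets, annual_contribution):
    """Bisection search for the constant annual return needed to reach the goal."""
    lo, hi = 0.0, 0.20  # search between 0% and 20% per year
    for _ in range(100):
        mid = (lo + hi) / 2
        if future_value(mid, years, current_assets, annual_contribution) < goal:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical client: $400,000 saved, $20,000 added per year, 20-year horizon,
# $1.5 million goal.
rate = required_return(goal=1_500_000, years=20,
                       current_assets=400_000, annual_contribution=20_000)
print(f"Required annual return: {rate:.2%}")  # roughly 4%
```

A planner can then compare that required return with what the client’s risk capacity and risk tolerance can realistically support.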

An Opportunity to Stand Apart

Kariv’s research shows that we still have a long way to go in understanding our clients and being able to appropriately advise them based on their unique goals, constraints and behavioral characteristics. You can learn more about his research and some of the tools he and his colleagues have developed to address this problem at www.TrueProfile.com (click on “The Science” tab) or from his recent First Ascent Master Class webinar at: https://goo.gl/davH4o.


Scott MacKillop is CEO of First Ascent Asset Management, a registered investment adviser based in Denver that provides outsourced investment management services to financial advisers and their clients. He is a 40-year industry veteran and can be reached at scott@firstascentam.com.


You Need Proactive Risk Management

Picture this scenario: you find the perfect employee. You groom her and train her to be your right hand—the one person (other than you) you need to keep your practice running smoothly. Then some other company comes and woos her away, promising a bigger salary and a better career path.

She leaves and you’re scrambling. Nobody knows how to do her job. Your business slows down because you need to do her job and your job until you can find somebody else. Then when you do find somebody else, you have to take the time to train them.

That scenario presents a risk to your business.

In her October FPA Annual Conference presentation, “Securing Your Firm’s Future,” Vanessa Oligino, director of business performance solutions at TD Ameritrade Institutional, said that addressing a situation like that before it happens is part of proactive risk management.

Oligino explained that proactive risk management is about protecting your brand, reputation and future. Part of that proactive risk management is ensuring the continuity of your business in the event of a change to your team.

Oligino provided tips to attendees to ensure that your greatest asset—your team—is engaged and solid in case of blips like this.

Have ongoing communication. This will help you know what’s going on with your key employees. Are they feeling unhappy? Are they feeling stuck or like you’re not giving them enough challenging work? Is their work/life balance off? Are they burning out? Does the career trajectory you offer them match what they’re looking for? Are you paying them enough? Offering attractive benefits?

Have a plan. Have a plan for major “in case” scenarios such as: an employee commits fraud; there’s a natural disaster; you make a bad hire; or a key employee leaves. Oligino gave the example of the protégé you trained who is the only one who knows how to do her job. She recommended having something like an understudy for each key role, so that if a key employee leaves, at least one other person is able to do their job.

Stay up-to-date with compliance and legal requirements. Make it a priority to update your employee handbook. Ensure you’re up-to-date when it comes to compliance issues and conduct annual compliance training.

Verify, verify and verify again. Ensure your employees verify that the emails they exchange with clients are, in fact, coming from those clients. In more than one session at the FPA Annual Conference, presenters described planners who “verified” information via email because the sender knew details only the client would know, only to become victims of fraud.

“Prioritize making progress on areas of vulnerability,” Oligino said. “There is no risk to your business that is safe to ignore.”


Ana Trujillo Limón is associate editor of the Journal of Financial Planning and the editor of the FPA Practice Management Blog. Email her at alimon@onefpa.org. Follow her on Twitter at @AnaT_Edits.


Rethinking Volatility: How to Measure Risk in Portfolios

On any given day, our Portfolio Construction Services (PCS) team holds several adviser consultations in which we discuss potential risk and opportunities within their portfolios, based on their clients’ tolerances and objectives. In many of these calls, our discussions tackle similar topics and concerns, one of the most prevalent of which is volatility. In this post, we’ll give you an overview of volatility and discuss why rethinking it—and the way it’s measured—could lead to well-balanced portfolios.

Standard Deviation vs. Downside Risk: The Good, the Bad and the Volatility

Naturally, volatility is of great concern for both financial advisers and end clients. But generally, the framework through which we view volatility is negative: it loses us money. And while that can be true, it’s only half the story. For many, the major metric for overall volatility is standard deviation. While we believe standard deviation is a good place to start, a deeper dive into the metric is important for understanding a deficiency that may otherwise go unnoticed.

Let’s get nerdy for a quick minute. Data can be distributed, or spread out, in different ways. In many cases, data clusters around a middle value (the mean) with no bias to the left or right and, if plotted on a graph, takes the shape of a bell (often referred to as a “bell curve”). This is called a normal distribution. Put another way, in a normal distribution, half of the values fall below the mean and half fall above it.

Standard deviation is a measurement of variability that shows how much dispersion there is from that middle value. This is how we measure volatility. On a normal distribution, the calculation for standard deviation takes into account observations both to the right of the mean (positive numbers), and observations to the left of the mean (negative numbers). Those positive observations to the right of the mean represent good volatility. Capturing upside volatility is what generates wealth over time. In fact, it is the reason we invest in the first place. Without it, we’d be better off putting our money under the mattress.
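
As a small illustration of that point, the sketch below computes standard deviation on a made-up set of monthly returns and shows that observations above and below the mean both feed into the figure. The return series is hypothetical.

```python
import statistics

# Hypothetical monthly returns (in percent), for illustration only.
returns = [1.8, -0.6, 2.4, 0.3, -1.9, 1.1, 0.7, -2.3, 3.0, 0.5, -0.4, 1.2]

mean = statistics.mean(returns)
std_dev = statistics.pstdev(returns)  # population standard deviation

above = [r for r in returns if r > mean]  # "good" volatility: returns above the mean
below = [r for r in returns if r < mean]  # "bad" volatility: returns below the mean

print(f"Mean monthly return: {mean:.2f}%")
print(f"Standard deviation:  {std_dev:.2f}% "
      f"(built from {len(above)} above-mean and {len(below)} below-mean observations)")
```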

In our PCS reports, which we run using adviser models and walk through during our consultations, we isolate and remove this “good volatility” by including downside risk as a metric. Downside risk eliminates all those positive observations to the right of the mean and focuses solely on those on the left side of the normal distribution—what could be referred to as “bad volatility.” Capturing more of these observations is what erodes wealth over time, something we look to minimize or avoid regardless of our time horizon or risk tolerance. How does the relationship between standard deviation and downside risk manifest itself in models?
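
Before getting to that question, here is a minimal sketch of one common way to put a number on downside risk: downside (semi-)deviation, which keeps only the observations below a chosen threshold. The sketch uses the mean as that threshold, consistent with the description above, and reuses the same hypothetical return series; the exact formula your reporting tool uses may differ.

```python
import math
import statistics

# Same hypothetical monthly returns as above (in percent).
returns = [1.8, -0.6, 2.4, 0.3, -1.9, 1.1, 0.7, -2.3, 3.0, 0.5, -0.4, 1.2]
threshold = statistics.mean(returns)  # could also be 0% or a minimum acceptable return

# Downside (semi-)deviation: square only the shortfalls below the threshold,
# average them over all observations, then take the square root.
# (Conventions vary; some tools divide by the count of shortfall months instead.)
shortfalls = [min(r - threshold, 0.0) ** 2 for r in returns]
downside_deviation = math.sqrt(sum(shortfalls) / len(returns))

print(f"Standard deviation: {statistics.pstdev(returns):.2f}%")
print(f"Downside deviation: {downside_deviation:.2f}%")
```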

In theory, the goal is to keep your overall volatility—as measured by standard deviation—as close to your benchmark’s as possible, so you capture as much of the upside as you can, while keeping your downside risk as far below the benchmark’s as possible. This would “skew” your model’s distribution so the “bell” sits farther to the right than the benchmark’s, meaning you captured more of the positive volatility and less of the negative volatility in the market. Makes sense, right? Sure, but as with most theories, in practical application it is easier said than done.

On the PCS team, we understand this challenge, so we focus on the percentage change, particularly the reduction, of these two measures relative to one another.

For example, a “moderate portfolio” benchmarked against the Morningstar Moderate Target Risk portfolio may have a standard deviation of 7.81 versus the benchmark at 7.91. The model has roughly 1.3 percent less overall volatility than the benchmark. But if one looks at the downside risk, where the model has a 3.96 versus the benchmark’s 4.50, the model has 12 percent less “bad volatility” than the benchmark. Clearly, this portfolio’s overall volatility is in line with the benchmark, but its bad volatility is proportionally quite a bit less. Therefore, we would expect a portfolio like this to perform better than the benchmark over the long term.
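
The percentage reductions in that example are simple relative comparisons against the benchmark; a quick sketch with the numbers quoted above:

```python
# Reproducing the relative comparison from the example above.
model_std, benchmark_std = 7.81, 7.91    # overall volatility (standard deviation)
model_down, benchmark_down = 3.96, 4.50  # downside risk

std_reduction = (benchmark_std - model_std) / benchmark_std
down_reduction = (benchmark_down - model_down) / benchmark_down

print(f"Overall volatility vs. benchmark: {std_reduction:.1%} lower")   # about 1.3% lower
print(f"Downside risk vs. benchmark:      {down_reduction:.1%} lower")  # 12.0% lower
```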

Rethinking how we view volatility—especially via standard deviation—while acknowledging its relationship to downside risk is an important component of constructing a well-balanced portfolio.

By the Janus Henderson Portfolio Construction Services Team