Tessa Wilkie, PhD student in Statistics and Operational Research at STOR-i CDT, Lancaster University

Dealing with Imputation Uncertainty
Fri, 01 May 2020

This post tackles a popular method that helps you understand the amount of variability you have introduced to your analysis through replacing missing data with estimated values. This variability is known as Imputation Uncertainty.

If you haven’t read my first two posts on Missing Data, it might be worth taking a look before you read this. You can find the first post here, and the second, here.

I had some misgivings about imputation before I learnt about methods to quantify imputation uncertainty.

My misgivings centred around the fact that with imputation we are sort of making the data up (in a statistically rigorous fashion, of course!). But even so, how happy could we be with our analysis after imputing?

It turns out we can use a method that gives us insight into how much variability is down to the fact that we have imputed missing data.

This can help us to understand how confident we can be in our statistical analysis, given that it is based in part on missing data.

One popular method that gives us a measure of imputation uncertainty is Multiple Imputation.

How do we do Multiple Imputation?

  • Firstly, we create an imputed data set using any method that involves taking draws from a predictive distribution.
  • We repeat this, to create M imputed data sets.
  • We can analyse these data sets, to come up with estimates of parameters we are interested in.
  • We can then combine these estimates. There are also formulas that we can apply to calculate the within-imputation variance, the between-imputation variance, and the overall variance.
  • These can give us an idea of how much of the variability in our estimates is down to the imputation process.
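To see how the combining step works, here is a small Python sketch using made-up numbers (the estimates and variances are purely illustrative). It applies the standard combining formulas, often called Rubin's rules, to M = 5 imputed data sets:

```python
import numpy as np

# Hypothetical estimates of some parameter (say, a mean) from
# M = 5 imputed data sets, each with its estimated variance.
estimates = np.array([10.2, 9.8, 10.5, 10.1, 9.9])
variances = np.array([0.40, 0.38, 0.45, 0.41, 0.39])
M = len(estimates)

# Combined point estimate: the average across imputations.
q_bar = estimates.mean()

# Within-imputation variance: average of the per-data-set variances.
w = variances.mean()

# Between-imputation variance: how much the estimates themselves vary,
# i.e. the variability caused by the imputation process.
b = estimates.var(ddof=1)

# Total variance of the combined estimate.
t = w + (1 + 1 / M) * b

print(q_bar, w, b, t)
```

The size of b relative to t then gives a rough sense of how much of the overall uncertainty is down to the imputation.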

Multiple Imputation isn’t the only method that can help us with Imputation Uncertainty. You can read about other approaches in some of the references below.


Further reading

You can find out more about Imputation Uncertainty in Chapter 5 of the below book. Multiple imputation is discussed in Chapters 5 and 10.

Little, R. J. A. and Rubin, D. B. (2020). Statistical analysis with missing data. Wiley Series in Probability and Statistics. Wiley, Hoboken, NJ, third edition.


This paper below contains a nice summary of Multiple Imputation and goes on to discuss the issue of variable selection. In other words, it considers what to do if your different imputed data sets imply that different variables are valuable and should be kept in a statistical model, while others should be discarded.

Wood, A. M., White, I. R., and Royston, P. (2008). How should variable selection be performed with multiply imputed data? Statistics in Medicine, 27(17):3227-3246.


And here is a long report I wrote as part of my studies at STOR-i on the broader topic of Missing Data: click to read. It discusses Imputation Uncertainty, and other issues, in more depth.

What to do with missing data?
Fri, 01 May 2020

In this post I’m going to describe some simple ways of dealing with missing data and discuss some of their strengths and flaws.

Which methods are appropriate for dealing with missing data will depend on the Missingness Mechanism that informs the missingness pattern. I describe Missingness Mechanisms in my previous post, which you can find here. If you haven’t read that, you may want to take a look before continuing with this post.

There are four techniques I’m going to describe: Complete Case Analysis, Unconditional Mean Imputation, Regression Imputation and Stochastic Regression Imputation.

I am going to illustrate these methods with an example. Imagine we are interested in racehorse performance and we conduct a survey of racehorse heights and weights. Unfortunately, some of the weight data has gone missing.

I wrote about Missing Data for a second, longer research report at STOR-i (I mentioned the shorter first report in this post). I created the images below from a simulation study that I carried out for this report (where I followed the racehorse example described above). I will post the link to the full report at the end of this blog.

Complete Case Analysis

One of the simplest ways of dealing with missing data is Complete Case Analysis. This means that we delete any responses that have any missing data in them. This method is okay if our data is Missing Completely at Random, or if we do not have a lot of missing data. But if those nice conditions do not hold, then we can introduce bias into our analysis.

For example, if tall horses do not like being weighed (perhaps the weigh bridge is claustrophobic), then if you delete all cases where weights are missing you will probably be looking at a sample with lower heights and weights than is representative of the population.
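Here is a tiny Python sketch of Complete Case Analysis on made-up racehorse data (the numbers are invented for illustration). The two tallest horses refused to be weighed, so dropping incomplete rows leaves us with a shorter, lighter sample than the population:

```python
import numpy as np
import pandas as pd

# Hypothetical survey of racehorse heights (cm) and weights (kg),
# where the two tallest horses refused the weigh bridge.
horses = pd.DataFrame({
    "height": [160, 158, 165, 170, 172],
    "weight": [450, 440, 470, np.nan, np.nan],
})

# Complete Case Analysis: drop every row with any missing value.
complete = horses.dropna()

# Only 3 rows survive, and the tallest horses are the ones lost,
# so the remaining sample under-represents tall (heavy) horses.
print(len(complete))
print(complete["height"].mean(), horses["height"].mean())
```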

So, if you don’t want to use Complete Case Analysis, there are still some fairly simple options open to you. These next three methods involve forms of something known as imputation.

Imputation? What is that?

The next method is the first I introduce where you impute data — this means you replace missing values with estimated values.

This may seem strange at first — isn’t this just making up values? How could this make your analysis better?

Well, imputation can help you avoid the pitfalls that simply deleting missing data could land you in.

However, you will (I hope) see that how you decide to impute the missing data can drastically affect the rigour of your analysis.

Unconditional Mean Imputation

The first method I am going to describe is the simplest: Unconditional Mean Imputation.  You take the mean of your observed data and impute any missing values with it. So, if we have missing racehorse weights, we fill in missing weight values with the average of those that we do observe.

This, you can probably imagine, can be problematic, too. If we look at the below plot, where I’ve imputed data that is simulated from the above situation — where taller horses are more reluctant to be weighed — we can see that the imputation will underestimate the true mean and the variation in the data.

Unconditional Mean Imputation: missing weight data is imputed using the mean of the observed weights
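As a quick numerical illustration, here is a Python sketch of Unconditional Mean Imputation on made-up weights. Notice how adding points exactly at the mean drags the sample variance down:

```python
import numpy as np

# Observed weights (kg); np.nan marks the horses that refused the scales.
weights = np.array([450.0, 440.0, 470.0, np.nan, np.nan])

# Unconditional Mean Imputation: fill every gap with the observed mean.
observed_mean = np.nanmean(weights)
imputed = np.where(np.isnan(weights), observed_mean, weights)

# Imputing at the mean understates the spread in the data:
# the imputed sample's variance is smaller than the observed sample's.
print(imputed)
print(np.var(weights[~np.isnan(weights)]), np.var(imputed))
```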

Regression Imputation

Regression Imputation is a little better: in the situation we are considering, it should help to eliminate some of the bias that Unconditional Mean Imputation brings.

This works by drawing a regression line, based on the complete observations that you have, and using that to predict where a piece of missing data will fall on the regression line.

The method works better in terms of bias, but it still underestimates the variation in the data — as we can see in the figure.

Regression Imputation: the imputed data (represented by pink circles) are on the regression line

To deal with that we have our final method, Stochastic Regression Imputation.

Stochastic Regression Imputation

This method works in a similar way to Regression Imputation — but it adds a random element so that, instead of points being interpolated directly onto the regression line, they are scattered about it in a random fashion.

As we can see in the plot, this shows a much more realistic representation of the variation in the data.

Stochastic Regression Imputation: the imputed circles (in pink) are scattered randomly about the regression line
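Here is a small Python sketch of both flavours on made-up data (the heights and weights are invented for illustration). The only difference between the two methods is the random noise added at the end:

```python
import numpy as np

rng = np.random.default_rng(1)

# Complete cases: heights (cm) and observed weights (kg).
height_obs = np.array([155.0, 158.0, 160.0, 162.0, 165.0])
weight_obs = np.array([430.0, 442.0, 448.0, 455.0, 468.0])

# Heights of the horses whose weights are missing.
height_mis = np.array([168.0, 171.0])

# Fit a least-squares line: weight = a + b * height.
b, a = np.polyfit(height_obs, weight_obs, 1)

# Regression Imputation: predictions sit exactly on the line.
reg_imputed = a + b * height_mis

# Stochastic Regression Imputation: add noise matching the residual
# spread (ddof=2 because two parameters were fitted).
residuals = weight_obs - (a + b * height_obs)
sigma = residuals.std(ddof=2)
stoch_imputed = reg_imputed + rng.normal(0.0, sigma, size=height_mis.size)

print(reg_imputed)
print(stoch_imputed)
```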

You may be thinking that introducing values yourself to replace missing ones will cause its own problems.

Read my next post to find out how to account for the uncertainty that you are introducing through imputing missing data.


Want to know more?

You can read more about Complete Case Analysis (and some variations/adaptations of it) in Chapter 3 of the below book. The Single Imputation Methods I’ve described are discussed in Chapter 4.

Little, R. J. A. and Rubin, D. B. (2020). Statistical analysis with missing data. Wiley Series in Probability and Statistics. Wiley, Hoboken, NJ, third edition.


Another single imputation method that I do not discuss here (but I do in my report) is Hot Deck. You can read about it in this paper:

Andridge, R. R. and Little, R. J. (2010). A review of hot deck imputation for survey non-response. International Statistical Review, 78(1):40-64.


I used the R package mice to do my analysis. You can read about it in this paper:

van Buuren, S. and Groothuis-Oudshoorn, K. (2011). mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3):1-67.


Thank you for reading! Click here to go to the next post on Missing Data.

Click here to see my first post on Missing Data: on the different types of missingness.

You can see my full report for STOR-i on Missing Data, .

Missing Data: Introducing the Missingness Mechanism
Wed, 29 Apr 2020

Often when we collect data, some is missing. What do we do? Well, there is a load of stuff to cover here (and I’m going to do it over a few posts). This post is going to cover an important question: what is causing the data to be missing?

What causes the data to be missing is known as the Missingness Mechanism. There are three main types: Missing Completely at Random (MCAR), Missing at Random (MAR), and Missing Not at Random (MNAR).

You can think of these as traffic lights: MCAR is green (easy to deal with), MAR is amber (a bit problematic but there are some decent methods out there) and MNAR is red (a total pig to deal with).

Missing Completely at Random
Markus snoozes peacefully, having decided that his data is MCAR

As the name suggests, Missing Completely at Random data means that the missingness presenting in your data is in a totally random pattern. There isn’t anything in the data driving it that you need to be further concerned with. This is nice, because you can get away with some simplistic methods to deal with it.

I describe some of those methods in my next post on missing data.

Missing at Random

Missing at Random data is where the missingness is driven by something in the data we are collecting, but that something is a variable we have observed. The preceding sentence starts to give me a headache if I think about it too much, so I prefer to think of it in terms of an example.

Imagine a university does a survey of previous students, to find out where they are working, what their income bracket is, etc.

Let’s say that alumni that work in a particular sector are less likely to disclose their income. But, they do disclose what sector it is that they work in. That data would be Missing at Random.

Missing Not at Random

But, what if students are less likely to respond to that income question the more they earn? Then we have Missing Not at Random data. The missingness depends on something we do not observe.  

This is very difficult to deal with and often causes bias in our analysis. To make it even more difficult, we cannot test whether the missingness mechanism is Missing at Random or Missing Not at Random.
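To make the three mechanisms concrete, here is a Python sketch that simulates the alumni survey example (the sectors, incomes, and missingness probabilities are all invented for illustration). Watch what happens to the mean of the incomes we can still see:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical alumni survey: sector (0 or 1) and income (GBP),
# with sector-1 alumni earning more on average.
sector = rng.integers(0, 2, size=n)
income = 30_000 + 10_000 * sector + rng.normal(0, 5_000, size=n)

# MCAR: every income is missing with the same probability.
mcar = rng.random(n) < 0.3

# MAR: missingness depends only on the (observed) sector.
mar = rng.random(n) < np.where(sector == 1, 0.5, 0.1)

# MNAR: missingness depends on the (unobserved) income itself.
mnar = rng.random(n) < np.where(income > 35_000, 0.5, 0.1)

# Under MCAR the observed mean is roughly unbiased; under MAR and
# MNAR the higher earners go missing and the observed mean drops.
for name, mask in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(name, round(income[~mask].mean()))
```

Under MAR the bias disappears once you condition on sector, because sector is observed; under MNAR there is nothing observed to condition on, which is what makes it such a pig.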

Want to know more?

You can read more about missingness mechanisms in Chapter 1 of the book below — this is a really good book on missing data in general.

Little, R. J. A. and Rubin, D. B. (2020). Statistical analysis with missing data. Wiley Series in Probability and Statistics. Wiley, Hoboken, NJ, third edition.

I’ve also included the reference for the paper that introduced the idea of considering missingness mechanisms:

Rubin, D. B. (1976). Inference and missing data. Biometrika, 63(3):581-592.


Thank you for reading. Click here to see my next post in this series. This will discuss some simple methods to deal with missing data.

Or you can skip to my final post on missing data: this will discuss a method that allows you to quantify the uncertainty that you are introducing into your analysis by using some of the methods discussed in my second post.


I wrote a 20 page report on Missing Data as part of my studies at STOR-i. It discusses the ideas above in more depth. If you want to take a look, .

Algorithm Aversion
Wed, 22 Apr 2020

So, you’ve created a brilliant solution to an operational research problem. But — not everyone is using it. What’s going on? Read on to find out.

Operational researchers spend their time trying to come up with solutions to problems businesses face such as: how much stock a business should order in each week; the most efficient route a delivery driver can take; the most profitable combination of products to sell. 

But on the other side, are the businesses that are going to use these solutions. Researchers’ solutions might well be rigorous and elegant (and they should be), but: these solutions are going to be used by people.  

And these people can choose whether to use it or not.

It turns out they might take some convincing.

In the last 10 years, several papers have come out exploring what researchers can do to encourage organisations to use OR solutions — when they are better than human judgement alone. 

Not everyone, it seems, has absolute faith in the power of mathematical or algorithmic solutions to problems like forecasting.

My dog Markus (pictured below), for example, will almost certainly prefer to use his nose, plus a certain amount of running about in random directions, to search for snacks, over an optimised search strategy.

Markus: and a very fine nose it is too.

On a more serious note, some studies have shown that people are less likely to use an algorithm for prediction if they have seen that it can get things wrong. This is known as Algorithm Aversion [1]. If they know the algorithm is not perfect, they are put off from using it.

Anecdotally, I see this with — for example — political polling. A lot of people seem to write off polls as nonsense, because they don’t always get things 100% right. Either they are perfect and worth following, or they contain error and are rubbish.

Back to Algorithm Aversion: one way to overcome this [2] is to allow people to adjust the output of the algorithm, in a controlled manner.

Markus photographed moments after I tried to explain an optimised search strategy to him.

Dietvorst, Simmons and Massey (2018) found that if people were allowed to adjust an algorithm’s forecast, they were happier with it. Restricting the amount that users could adjust forecasts did not make a lot of difference to their satisfaction.

Of course, in a real life situation, it may make a lot of sense for someone in a business to adjust a forecast produced by an algorithm: if they know something the algorithm doesn’t[3].

For example, if the business is about to launch a big advertising campaign or slash prices — or if a close competitor has just opened a shop right opposite yours.

This is a new area of research and so far relies on some limited field experiments, with sometimes seemingly contradictory results.

Beware our own expertise

A paper published last year[4] suggested that people were likely to choose an algorithm’s advice over that of other people.

However, they were a little bit less likely to pick an algorithm’s opinion over their own.

The paper also found that people they determined to be experts were much less likely to take algorithmic advice over their own opinion, and that that hurt the accuracy of their predictions.


[1] Dietvorst, B.J., Simmons, J.P. and Massey, C., (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), p.114.

[2] Dietvorst, B.J., Simmons, J.P. and Massey, C., (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3):1155-1170.

[3] Fildes, R., Goodwin, P., Lawrence, M. and Nikolopoulos, K., (2009). Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting, 25(1):3-23.

[4] Logg, J.M., Minson, J.A. and Moore, D.A., 2019. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151:90-103.

Censored demand
Mon, 23 Mar 2020

Censored demand happens when a shop — or any other type of retailer — runs out of stock. How do they know how much more they could have sold? Having a good handle on this is important for retailers, particularly those that stock perishable goods. This post will explore ways in which mathematical models can help them to do that.

As part of our assessments here at STOR-i we have to write a short report on a research topic of our choice (as well as a long one, which I’m working on now). For my short report I wrote about retail analytics, and particularly the issue of censored demand. You can see the link to the report at the bottom of this post.

Forecasting demand for retailers is a thorny problem. They need to estimate how much they are going to sell to decide how much stock to order.

But, unless they have very high levels of stock, they are probably going to have days when they sell out. So then how do they decide how much demand is actually out there?

This is a particularly big problem for those that stock perishable goods. If a retailer were to order in a mountain-sized pile of grapes, for example, they would have to throw away what they didn’t sell after just a few days.

Waste is something retailers need to avoid — not just for their profits, but there are global environmental reasons why we should all be trying to cut down on waste.

On the other hand, if a retailer regularly runs out of stock, they could find that customers decide to shop elsewhere.

Mathematical models can help us to overcome some of this uncertainty.

In my report I focused on parametric models. This means we assume that the demand broadly conforms to an underlying mathematical distribution.

This is helpful, because if we can observe a bit of the distribution, we can gain insight into what the bits we cannot see might be like.

More formally, we can use the observed demand (the demand recorded before the retailer ran out of stock) to make inference about the unobserved demand (the demand that the retailer didn’t fulfil after they ran out of stock).

I look closely at two methods in my report: one to deal with normally distributed data (Nahmias’ method) [1], and one to deal with demand that corresponds to a Poisson distribution (Conrad’s method) [2].

In my report I show that these methods work nicely, as long as we have picked the right distribution.
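To give a flavour of how this works, here is a Python sketch of the general idea behind these methods — not Nahmias’ exact algorithm, just the underlying principle. It fits a normal distribution to right-censored sales by maximum likelihood, using the density for fully observed days and the survival function for sold-out days:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Simulated true daily demand: normal with mean 100, sd 15.
demand = rng.normal(100.0, 15.0, size=500)

# The shop stocks 110 units: recorded sales are censored at that level.
stock = 110.0
sales = np.minimum(demand, stock)
sold_out = demand >= stock  # days where true demand went unobserved

# Negative log-likelihood of a right-censored normal sample:
# density for fully observed days, survival function for sell-outs.
def nll(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll = stats.norm.logpdf(sales[~sold_out], mu, sigma).sum()
    ll += sold_out.sum() * stats.norm.logsf(stock, mu, sigma)
    return -ll

mu_hat, sigma_hat = optimize.minimize(nll, x0=[90.0, 10.0],
                                      method="Nelder-Mead").x

# The naive mean of sales is biased downwards; the censored-likelihood
# estimate recovers something close to the true mean demand of 100.
print(sales.mean(), mu_hat, sigma_hat)
```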

But not all retail demand behaves nicely and conforms to the distribution we assume (or sometimes any distribution at all).

What can go wrong?

What follows is an illustration of what can happen if we use a good method but we assume the wrong underlying distribution.

In the picture below (nabbed from my report), I’ve simulated data from a bimodal distribution.

In this case it is data from two normal distributions, with different means and variances. In the picture, I’ve plotted a histogram of the simulated data.

I create a right-censored data set by removing any data points with a value higher than 120. The removed data is represented by the dark blue columns.

I then look at what happens if I assume (mistakenly) that my data is normally distributed. So I then use Nahmias’ method to estimate the distribution based on the only data I can now use (the light blue columns).

Estimating censored demand: I’ve mistakenly assumed my data is normally distributed

And so I’ve got things very wrong. The red line represents what I think the true demand looks like. You can see I totally miss the second (unobserved) peak.

Nahmias’s method is really good on censored data that comes from a normal distribution but I’m deliberately tripping it up by giving it a nasty (but plausible) underlying distribution.

This is a simulation, so I know I’m getting it wrong. Bear in mind that if this were a real-world situation I would only be able to see the light blue columns. Based on that, assuming normally distributed data maybe isn’t great, but it would not be completely silly either.
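For anyone who wants to replicate the flavour of this experiment, here is a simplified Python sketch. It naively fits a plain normal to the uncensored points, rather than using Nahmias’ censored-data estimator, but it shows the same failure: the misspecified model completely misses the hidden second peak.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Bimodal demand: a mixture of two normals with different means
# and variances, in the spirit of the report's example.
demand = np.concatenate([rng.normal(100.0, 10.0, size=700),
                         rng.normal(140.0, 5.0, size=300)])

# Right-censor at 120: the second peak is almost entirely unobserved.
observed = demand[demand <= 120.0]

# Mistakenly fit a single normal to the observed points.
mu, sigma = observed.mean(), observed.std()

# The misspecified model thinks demand above 120 is rare...
print(stats.norm.sf(120.0, mu, sigma))
# ...but in truth roughly 30% of demand lies above the censoring point.
print((demand > 120.0).mean())
```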

If you want to read more about this subject, below are some links to papers I mention in this post (as well as my report).


[1] Nahmias, S. (1994).  Demand estimation in lost sales inventory systems. Naval Research Logistics (NRL), 41(6):739–757.

[2] Conrad, S. (1976).  Sales data and the estimation of demand. Journal of the Operational Research Society, 27(1):123–127.

Why statistics?
Tue, 11 Feb 2020

This is a personal post on what I think statistics is, why I was drawn to study it, and why it’s basically a super awesome cool subject everyone should know more about.

Many people appear to regard the subject with suspicion, feeling that statistics are more often used to bamboozle than to enlighten.

When, in my 20s, I started studying Applied Statistics in the evenings I got a lot of puzzled looks from friends and colleagues. What on earth did I want to do that for? What was the point?

To me this seemed odd.

As a journalist you are always trying to find out what is going on, and what it might mean. In my History degree I tried to find out what went on in the past and what it might mean.

These questions are very similar to those that statisticians ask. Only, statisticians have an additional tool to help them — and it’s a big one: maths.

Mathematical frameworks help us understand what conclusions we can draw from data, and how confident we can be in them. They give us tools to deal intelligently with what we do not — or cannot — know.

And this is great. Because of course, in the real world, we never get all the information we need. We are always having to piece together a picture based on what we can see, and we need a steer on what we cannot see and how important that might be.

For example: I might have a disagreement with my dog, Markus, about his biscuits. He might insist that Brand A’s dog biscuits are bigger than Brand B’s, and therefore he should be bought Brand A’s.

As he’s a scientifically minded dog, he would allow me to take a random sample of each to weigh.

What if, based on the sample, Brand A’s biscuits are slightly larger than Brand B’s? Are Brand A’s biscuits bigger, or could this be a fluke?

Well, there are statistical tests to decide whether there is actually enough evidence to accept Markus’s claim.

Markus protesting about biscuits

There are also well defined frameworks to assess the probability that you will wrongly accept the claim by chance (that is, you select a random sample that happens to be of unusually large biscuits from Brand A, when in fact Brand A’s biscuits are not bigger than Brand B’s).
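For the curious, here is what that test might look like in Python, with invented biscuit weights: a one-sided Welch t-test of Markus's claim. The significance level of the test is exactly the framework mentioned above — it caps the probability of wrongly accepting the claim by chance.

```python
from scipy import stats

# Hypothetical weighed samples of dog biscuits (grams).
brand_a = [10.9, 11.2, 10.7, 11.0, 11.4, 10.8, 11.1, 10.9]
brand_b = [10.1, 10.3, 9.8, 10.0, 10.4, 9.9, 10.2, 10.1]

# One-sided Welch t-test (no equal-variance assumption) of the claim
# that Brand A's biscuits are heavier than Brand B's.
result = stats.ttest_ind(brand_a, brand_b, equal_var=False,
                         alternative="greater")

# A small p-value means the observed difference would be very
# unlikely if the brands' biscuits were really the same size.
print(result.pvalue)
```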

Of course, advanced statistical methods deal with much more difficult and nuanced situations than my dog biscuit example.

Statistics has not advanced to the point where we can guarantee that we are right about everything all the time*. But that’s not a reason to dismiss it.

Statistical methods, if used properly, bring us a vast amount of insight into problems. Why wouldn’t you want to know about that?


*On the subject of things I’ve not always been right about: I was going to put something in this post about Benjamin Disraeli’s famous “Lies, damned lies and statistics” quote, only to discover that we don’t actually know who coined that one.[1]


[1]

Extreme value theory: predicting the ultra rare
Tue, 28 Jan 2020

Extreme value theory is a really exciting — and kind of astonishing — area of statistics. This is because it can tell us about the probability of events happening that are so rare there is barely any data recorded on them.

This seems perverse. Very broadly, traditional statistics says that we may not be able to make accurate predictions about what may happen on an individual level (for example, how tall one puppy may grow to be in adulthood). But, if we look at a large population (the development of large numbers of puppies) we can get an idea of the range that we expect the majority to be in.

With extreme value theory, we are not interested in the behaviour of the majority. We want to look at the likelihood of a very, very rare event happening. Such as a Dachshund puppy that grows to be bigger than a Doberman.

A Dachshund puppy


Why would we want to know that? Well, let’s say you own a VW Beetle and you want to buy a Dachshund puppy. Your family is fiercely attached to the car and will only agree to getting a puppy if it means you will not need to sell the car.

You are pretty sure this will not happen. You promise them this could never happen. But then you start to worry: could the puppy grow to be too big to fit in the car? You’ve never heard of — or seen — a Dachshund that’s too big for a Beetle. But does that mean you can be certain?

The trouble with extreme events — from a statistical point of view — is that they do not happen very often, if at all. We might want to know the probability of a once-in-1000-years type of event. We do not have a large body of data that can give us a steer on what and when these events might occur.

So, are we stuck?

No! Thanks to extreme value theory.

Statisticians can focus on the tails of the data — meaning they can examine the events that have a very low probability of occurring. They usually do this in one of two ways in the univariate setting.*

We can look at maxima over a certain period of time. For example, we could group Dachshunds according to the year they were born, then record the tallest in each year group. Surprisingly (to me), these maxima tend towards a single family of distributions (the Generalised Extreme Value distribution).

This is Very Good News in statistics. It means we have mathematically backed insight into the way the population of maxima behaves.
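Here is a Python sketch of the block-maxima approach on simulated Dachshund heights (all the numbers are invented). scipy's genextreme is its name for the Generalised Extreme Value family:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical heights (cm) of Dachshunds, one block per year:
# 50 years of records, 1000 dogs measured each year.
heights = rng.normal(21.0, 2.0, size=(50, 1000))

# Block maxima: the tallest dog recorded in each year.
annual_max = heights.max(axis=1)

# Fit a Generalised Extreme Value distribution to the annual maxima.
shape, loc, scale = stats.genextreme.fit(annual_max)

# Estimated probability that a year's tallest dog tops 32 cm --
# an event more extreme than anything in the simulated record.
p_record = stats.genextreme.sf(32.0, shape, loc, scale)
print(p_record)
```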

What if we had two Dachshunds born in 2015 that grew very big? If we were looking for maxima we would only count the largest one, so we would be cutting out a potentially useful bit of data. A method that gets around this issue is to look at exceedances — data points that come above a certain threshold.

An unusually large puppy, with adult to scale


If we decide that any Dachshund taller than, say, 40cm is remarkable then we can look at the distribution of Dachshunds that exceed that level. This gives us data that approximately follows a Generalised Pareto Distribution.

One of the big academic issues here is choosing that threshold level: set it too high and you don’t get much data. Set it too low and you are out of the tails of the distribution.
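And here is a companion Python sketch of the exceedances approach, again with invented numbers and an arbitrarily chosen threshold:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical heights (cm) of 100,000 Dachshunds.
heights = rng.normal(21.0, 2.0, size=100_000)

# Peaks over threshold: the amounts by which dogs exceed 26 cm.
threshold = 26.0
excess = heights[heights > threshold] - threshold

# Fit a Generalised Pareto Distribution to the exceedances
# (location pinned at zero, since these are excesses over the threshold).
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)

# Tail estimate: P(height > 30 cm) is the chance of exceeding the
# threshold at all, times the fitted GPD tail beyond it.
p_threshold = (heights > threshold).mean()
p_30 = p_threshold * stats.genpareto.sf(30.0 - threshold, shape, loc, scale)
print(p_30)  # a very small probability, at the very edge of the data
```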

These theories have important applications — beyond prospective dog owners with families that love their car a little too much.

Flood defences are one area where governments need to know what a really, really bad flood would look like and how to protect people from it. But, because flood defences are expensive, they also don’t want to build ones that are bigger than necessary.

Finance is another area. How likely is an extreme financial or economic shock? What measures should be in place to ensure that institutions, and the financial system itself, can withstand it? Regulators would want to make sure they are not insisting on such strongly risk-averse measures that it is impossible for companies to make a profit.

*By univariate, I mean we are looking at just one variable. For example: height of dog, or observed temperatures or daily rainfall. We are not looking at several variables together (the multivariate setting).

Want to know more? There is a whole journal dedicated to Extreme Value Theory.

Hello!
Wed, 08 Jan 2020

Welcome to my blog. I am a student on STOR-i’s MRes programme in Statistics and Operational Research with Industry.

This year we will explore a range of cutting edge research areas ahead of choosing one to focus on for the dissertation. I will be writing about some of the exciting research topics we encounter, so make sure to check back here for updates.

I joined STOR-i after working for more than a decade in journalism. You can read more about my background and my reasons for studying at STOR-i on my About Me page.

In the picture you see me exploring the countryside with my dog, Markus. That is one of the things I love to do in my free time.
