Data Analysis & Simulation

Archive for the ‘EasyFit’ Category


EasyFit Makes Your Statistics Study a Piece of Cake

Tuesday, September 24th, 2013

Have you ever been busy with your statistics homework, coursework, or a Ph.D. thesis requiring the use of probability distributions, and wondered if there was an easier way to do the calculations?

Well, you can spend a few evenings at the nearest café with a cup of coffee and a cake, trying to do the math yourself.

Or, you can buy a monthly subscription license for EasyFit and have your calculations done in minutes.

And you will still have money left to enjoy a tall cup of Caffè Latte.

Use EasyFit For Just $1 a Day or Less

Thursday, September 19th, 2013

If you have recently been evaluating EasyFit and find it useful for your short- or medium-term projects, but cannot justify the upfront cost of the Perpetual License, today we have very good news for you: at just $1 a day or less, you can use the fully functional version of the product without purchasing the Perpetual License.

You can subscribe for a minimum of one month, and the effective daily price drops as you subscribe for a longer term: from around $0.80 a day for a three-month license down to less than 55 cents a day for an annual license (see detailed pricing).

Note that the subscription fee is non-recurring: once your license expires, your credit card will not be charged, but you can still continue using the product to open your existing EasyFit project files and view the analysis results obtained during the subscription term.

For subscription ordering details, click here.

Understanding the Log-Normal and Log-Gamma Distribution Parameterizations

Tuesday, September 11th, 2012

Naturally, when you deal with a particular probability distribution that fits many of your data sets well, one day you will want to learn more about it. What is it about this distribution that makes it work for your data? And, moreover, how can you interpret its parameters?

The good news is that for many probability distributions, the meaning of the parameters is described in the scientific literature. The classic example is the Normal distribution, which has two parameters: σ (scale) and μ (location). Its parameterization is easy to understand: as you change the location parameter, the probability density graph moves along the x-axis, while changing the scale parameter affects how wide or narrow the graph is.
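To see this numerically, here is a quick sketch in Python with SciPy (our own illustration, not part of EasyFit): the peak density is unchanged when μ shifts, and it gets lower as σ grows.

    from scipy.stats import norm

    # Shifting the location parameter mu moves the curve without changing its shape:
    print(norm.pdf(0.0, loc=0.0, scale=1.0))   # ~0.3989 (peak at x = 0)
    print(norm.pdf(2.0, loc=2.0, scale=1.0))   # ~0.3989 (same peak, now at x = 2)

    # Increasing the scale parameter sigma widens the curve and lowers the peak:
    print(norm.pdf(0.0, loc=0.0, scale=2.0))   # ~0.1995 (wider, flatter curve)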



However, for quite a few distributions, modifying the parameters and observing how the graphs change will be of little help in understanding what those parameters indicate. One such distribution is the Lognormal model, defined as “a continuous probability distribution of a random variable whose logarithm is normally distributed.” What does that mean exactly? For a better understanding, compare the CDFs of the Normal and Lognormal distributions:


Normal distribution CDF: F(x) = Φ((x − μ) / σ)

Lognormal distribution CDF: F(x) = Φ((ln(x) − μ) / σ)

where Φ denotes the standard Normal CDF.

As you can see, the Normal model is “embedded” in the Lognormal: if x follows the Lognormal distribution, then ln(x) has the Normal distribution with the same parameters (σ, μ) as the original Lognormal distribution. And this is the key point to understand: the parameters of the Lognormal model are not the “pure” scale and location (quite intuitive in the Normal model), but rather the scale and location of the embedded Normal distribution.
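This relationship is easy to verify by simulation; here is a minimal Python sketch (our illustration with hypothetical parameter values, using NumPy rather than EasyFit):

    import numpy as np

    rng = np.random.default_rng(seed=1)
    mu, sigma = 1.5, 0.4   # hypothetical Lognormal parameters

    # Draw Lognormal(sigma, mu) samples and take their logarithms:
    x = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
    logs = np.log(x)

    # ln(x) should be approximately Normal with the same mu and sigma:
    print(logs.mean(), logs.std())   # close to 1.5 and 0.4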

The same logic applies to the Gamma and Log-Gamma pair of distributions. The classical Gamma distribution has two parameters, α (shape) and β (scale), and its CDF is as follows:


Gamma distribution CDF: F(x) = γ(α, x/β) / Γ(α)

where γ denotes the lower incomplete gamma function.

The shape parameter determines the form of the Gamma PDF graph, while the scale parameter affects the spread of the curve. Similar to the Gamma model, the Log-Gamma distribution has two parameters with the same names (α, β), but its CDF has the form:


Log-Gamma distribution CDF: F(x) = γ(α, ln(x)/β) / Γ(α)

Just as with the Normal and Lognormal pair, in the Log-Gamma model ln(x) has the Gamma distribution with the same parameters (α, β). These parameters cannot be treated as the “pure” shape and scale of the Log-Gamma distribution itself, but rather as the shape and scale of the embedded Gamma model.
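Again, a short simulation makes the relationship concrete; a sketch in the same spirit (hypothetical parameter values):

    import numpy as np

    rng = np.random.default_rng(seed=1)
    alpha, beta = 3.0, 0.5   # hypothetical shape and scale

    # Build Log-Gamma samples by exponentiating Gamma(alpha, beta) draws:
    x = np.exp(rng.gamma(shape=alpha, scale=beta, size=100_000))

    # ln(x) should follow Gamma(alpha, beta):
    logs = np.log(x)
    print(logs.mean())   # close to alpha * beta      = 1.5
    print(logs.var())    # close to alpha * beta ** 2 = 0.75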

There are dozens of different probability distributions out there, and even if you use only a couple of them on a daily basis, it can sometimes be hard to remember the meaning of every parameter. That is why we decided to include a little feature in EasyFit that helps keep your memory fresh: when you move the mouse pointer over a distribution parameter edit box, EasyFit displays a pop-up hint indicating the meaning of that particular parameter:


[Screenshot: distribution parameter hint]

Using this feature, you can better focus on your core analysis rather than the technical details like the ones outlined in this article.

Probabilistic Analysis vs Time Series Analysis

Wednesday, December 14th, 2011

Can EasyFit be used to analyze time series data? To answer this question, which we recently received from a customer, we will shed some light on the differences between probabilistic analysis and time series analysis.

When dealing with time series data, you usually have as input a set of (time, value) data pairs representing consecutive measurements taken at equally spaced time intervals. The goal of time series analysis is to identify the nature of the process represented by your data, and to use it to forecast future values of the series being analyzed.

A widespread application of this kind of analysis is weather forecasting: for more than a century, hundreds of weather stations around the world have been recording various important parameters such as air temperature, wind speed, precipitation, and snowfall. Based on these data, scientists build models reflecting seasonal weather changes (depending on the time of year) as well as global trends – for example, the temperature change over the last 50 years. These models are used to provide weather forecasts for government and commercial organizations. In a typical forecast, the predicted values are not assigned probabilities: “In May, the maximum daily air temperature is expected to be 22 degrees Celsius.”

In contrast to predictions based on time series analysis, probabilistic analysis gives you not just a single value as a forecast, but a probabilistic model that accounts for uncertainty. In this scenario, you obtain a continuous range of values with assigned probabilities. Of course, for real-world applications it is more practical to deal with specific values, so probabilistic models are used to obtain predictions at fixed probability levels. Continuing the example above, a forecast might look like: “In May, the maximum daily air temperature will not exceed 22 degrees Celsius with 95% probability.”

So can distribution fitting be useful when analyzing time series data? The answer depends on the goals of your analysis – that is, what kind of information you want to derive from your data. If you want to understand the connection between predicted values and probabilities, you should fit distributions to your data (just keep in mind that in this case the “time” variable will be unused). On the other hand, if you need to identify seasonal patterns or global trends in your data, you should go with “classical” time series analysis methods.
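To make the first case concrete, here is a minimal sketch (in Python with SciPy, using made-up temperature readings and a Normal model chosen purely for illustration) of fitting a distribution to the values of a time series and reading off a forecast at a fixed probability level:

    import numpy as np
    from scipy import stats

    # Hypothetical May daily maximum temperatures; the "time" column is unused:
    temps = np.array([18.2, 21.5, 19.8, 23.1, 20.4, 22.7, 17.9, 21.0, 24.3, 19.5])

    # Fit a Normal distribution to the values alone:
    mu, sigma = stats.norm.fit(temps)

    # "The maximum daily temperature will not exceed t with 95% probability":
    t95 = stats.norm.ppf(0.95, loc=mu, scale=sigma)
    print(round(t95, 1))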

EasyFit Used for Probabilistic Currency Forecasting

Monday, February 21st, 2011

Because risk and uncertainty are a part of virtually all areas of our life, with finance being one of the most important, scientifically based risk management methods are gaining more and more popularity among finance industry professionals. Currency fluctuations affect all businesses dealing with multiple currencies, so having at least some degree of certainty about future exchange rates can be a significant success factor for any international enterprise. A wide range of currency forecasting methods have been developed; however, few of them prove reliable in the long run: most algorithms only work for a short period of time and need to be tweaked as market conditions change.

Brijen Hathi, a Research Fellow at the Planetary & Space Sciences Research Institute, performs his own research in the field and publishes the results in the Currency Forecasting Blog. The forecasting methodology employed by Mr. Hathi is based in part on the same techniques used in probabilistic risk analysis. Like most modern forecasting methods, this approach uses historical data to predict the future, but the big difference is that it also assigns specific probabilities to the predictions. For example, for a US-based company doing business in the UK, it doesn’t really matter what the exact GBP/USD exchange rate is going to be over the next 30 days, as long as it stays within a specific interval with a high probability (95% or more). Recently, Mr. Hathi published an article highlighting the use of EasyFit to model the pricing probability of the Pound Sterling versus the US Dollar from historical data. It is fascinating to see EasyFit being used in what we believe is a truly scientific approach to data analysis, and we hope to see new developments in this area soon.

EasyFit Used to Improve the Forecasting of Software Project Status

Monday, November 29th, 2010

The software development community struggles to identify whether its projects are on schedule, given the inherent risks of constant invention that inevitably involves elements of uncertainty and risk. Current practice is for developers to estimate a software project and attempt to consider (up front) all variations in order to get a viable estimate of time and cost. This process is laborious, and even with due rigor, projects slip when it becomes clear that estimated and actual times fail to match. This leads to costly project overruns and a lack of trust in future estimates.

As part of the Agile movement for software development, we think there is a better way, and we are championing the use of Monte Carlo simulation as a way of assessing likely progress and dealing with delays as early as possible… read the full case study
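As a flavor of what such a simulation looks like, here is a minimal sketch (our illustration with made-up task estimates, not the model from the case study): each task duration is sampled from a triangular distribution built from best/most likely/worst estimates, and the simulated totals yield completion times at chosen probability levels.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Hypothetical per-task estimates: (best, most likely, worst) days.
    tasks = [(2, 4, 9), (1, 3, 6), (5, 8, 15), (3, 5, 10)]

    # Sample every task from a triangular distribution and sum across tasks:
    totals = sum(rng.triangular(lo, mode, hi, size=100_000)
                 for lo, mode, hi in tasks)

    # Instead of a single point estimate, report durations at probability levels:
    for p in (0.5, 0.8, 0.95):
        print(f"{p:.0%} chance of finishing within {np.quantile(totals, p):.1f} days")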

EasyFit in Academia

Tuesday, August 3rd, 2010

Over the last five years, we have been adding new features to EasyFit mostly with business users in mind, but thanks to the nature of the product and the special academic pricing, it has become quite popular in the academic community: a quick Google search reveals numerous research papers referring to EasyFit. To name just a few:
 

  • “Co-evolution of Social and Affiliation Networks” (University of Maryland, USA) [link]
  • “Power laws in top wealth distributions: evidence from Canada” (Brock University, Canada) [link]
  • “Duration of Coherence Intervals in Electrical Brain Activity in Perceptual Organization” (RIKEN Brain Science Institute, Japan) [link]
  • “Resource Management Schemes for Mobile Ad hoc Networks” (National University of Singapore, Singapore) [link]
  • “Modelling the diffusion of innovation management theory using S-curves” (University of London, UK) [link]

(see the larger list of papers using EasyFit)

It is pleasing to see EasyFit helping researchers in such diverse disciplines get their jobs done more efficiently.

EasyFitXL Is Now Compatible With Excel 2010

Monday, July 12th, 2010

EasyFitXL – the distribution fitting add-in for Excel – was first introduced with the release of EasyFit 4.0 back in 2007. When designing EasyFitXL, we did a lot of research into which Excel versions to support. At that time, the latest version was Excel 2007, which included some useful new features, such as support for larger worksheets and multi-threaded worksheet recalculation. However, many customers were not rushing to upgrade to Excel 2007 because of its controversial Ribbon interface, so we had to make EasyFitXL compatible with the previous version, Excel 2003.

According to some publicly available data, Excel 2002 and Excel 2000 still had a considerable user base, so we decided to support these two older versions as well. As a result, EasyFitXL initially supported Excel versions 2000 through 2007, covering perhaps over 99% of all Excel installations in the world.

Last month, Microsoft released Excel 2010, which does not make a big difference in terms of data analysis; however, with its release we started receiving compatibility complaints from our customers, so we performed in-depth testing and released an updated version of EasyFit (available for download).

EasyFit 5.3 Released

Wednesday, January 20th, 2010

Recently, a customer contacted us and noted that the Inverse Cumulative Distribution Function (the Quantile Function) of the Inverse Gaussian distribution implemented in EasyFit works well for lambda=1902.1, mu=41857.0, and P=0.9, but fails for the same lambda & mu and P=0.99. Last week, we released an updated version of EasyFit that fixes the problem, and in this post we would like to elaborate on the issue.

Evaluating the Inverse CDF of the Inverse Gaussian Model
Since the CDF of the Inverse Gaussian distribution is quite complicated (expressed in terms of two Laplace integrals), the Inverse CDF of this model is not available in closed form and cannot be easily evaluated for a given set of distribution parameters. Initially, we implemented an iterative approximation algorithm that evaluates the ICDF(P) using the CDF, as well as the PDF to speed up the calculation. The algorithm itself works very well over a great range of input parameters; however, we placed a limitation on how many iterations it is allowed to perform.
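For reference, the Inverse Gaussian CDF is commonly written in terms of the standard Normal CDF Φ, which makes it clear why no closed-form inverse exists:

F(x) = Φ(√(λ/x) · (x/μ − 1)) + exp(2λ/μ) · Φ(−√(λ/x) · (x/μ + 1))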

Because EasyFit is intended to be an interactive data analysis tool, we are always looking for a balance between the feature set and performance, which is especially important when using EasyFit with Excel worksheets recalculated in real time. The limitation on the number of iterations is necessary to make sure the algorithm doesn’t fall into an “infinite loop” – the situation where it is unable to reach the specified accuracy regardless of how long it continues to run. This problem usually occurs when we hit the precision limits of the computer’s CPU: in theory, the algorithm must converge in a limited number of steps, but in practice it just keeps iterating over and over without any improvement in accuracy.

As a solution, we made some improvements to the algorithm, making it more robust and efficient: it now works with the same accuracy, but over a larger range of input parameters. For example, for the parameters that initially caused the problem (lambda=1902.1 and mu=41857.0), the ICDF(P) can now be evaluated for values of P up to 0.999925, which is more than enough for most statistical analysis applications.
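For readers curious about the general technique, here is a minimal sketch of a Newton-type inversion with an iteration cap (our illustration in Python with SciPy, not EasyFit’s actual implementation; SciPy’s invgauss maps to our parameters as invgauss(mu/lambda, scale=lambda)):

    from scipy import stats

    def icdf_newton(p, cdf, pdf, x0, tol=1e-9, max_iter=100):
        # Invert a CDF by Newton's method, using the PDF as the derivative.
        # The iteration cap guards against stalling at the CPU's precision limit.
        x = x0
        for _ in range(max_iter):
            err = cdf(x) - p
            d = pdf(x)
            if abs(err) < tol or d == 0.0:
                break
            x = max(x - err / d, 1e-12)   # keep the iterate inside the support (x > 0)
        return x

    # Check with the parameters from the customer report:
    lam, mu = 1902.1, 41857.0
    dist = stats.invgauss(mu / lam, scale=lam)
    x = icdf_newton(0.99, dist.cdf, dist.pdf, x0=mu)
    print(x, dist.cdf(x))   # the second value should be close to 0.99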

Should You Upgrade?
Since this minor issue does not affect the accuracy of distribution fitting, you only need to upgrade if you are experiencing problems evaluating the Inverse CDF of the Inverse Gaussian distribution for P>0.9; otherwise, EasyFit 5.2 will still work well for you.

New Version of EasyFit Available

Monday, June 1st, 2009

We have just released EasyFit Version 5.1, an update that fixes a bug causing incorrect calculation of the chi-squared goodness-of-fit statistic for small sample sizes. To upgrade, uninstall EasyFit 5.0 from your computer, then download and install the latest version.
