What is personalisation?
Understanding audiences: Data.
Trend and data analysis.
Practical personalisation: Tips for success.
We all want to be treated as individuals: a name, not a number. Yet as organisations introduce “self-service” models, driven partly by the need to reduce costs and partly because millennials prefer to engage digitally, they inadvertently remove the all-important human touch from the experience.
Humans are social animals, and we naturally seek the companionship of others. The whole human-social dynamic is built upon establishing and maintaining close relationships, constantly fine-tuning how we engage with each other. The old adage that “people buy from people” cannot be ignored, yet when machines are involved in the selling and buying process, it too often is.
Clearly, technology is here to stay, and I suspect that few organisations will return to the days of entirely manual, human-driven experiences. I also suspect that most are now seriously looking at how technology can be taught to personalise such contact, rather than simply adopting a faceless, anonymous, one-for-everyone broadcast approach.
Personalisation has emerged as a killer application of cloud computing. Brands are looking to build intelligence from big data archives, use machine learning to learn from behaviours, and apply artificial intelligence to deliver brand experiences that better align with our wants and needs.
We could debate for hours how such techniques tread the careful line between useful, invasive and downright intrusive, but machine-driven personalisation is here to stay. Incoming legislation such as GDPR will of course help address these challenges, and organisations themselves are learning to avoid the high-profile false positives that many of us have experienced.
Inspiretec exists to help organisations leverage technology in a manner that delivers the best possible end-user experience. We have years of experience in delivering personalisation products, using our own IP and through trusted partners.
I hope you find this guide useful. It shares background, guidance and best practice that you might want to consider in improving how your machines engage with your customers.
What is personalisation?
In the age of the customer, merchants both online and offline are focusing on a common theme: How can I provide the best customer experience? To achieve the desired level of customer experience, one of the methods we turn to is personalisation.
A recent survey shows that 24 out of 25 retailers believe personalisation is a top three business priority. This is backed by a Forrester study, which states that 68% of firms believe that delivering personalised experiences is a priority for their businesses.
“Personalisation is the art of targeting a consumer directly using a customised experience based on their unique interests, buying behaviours, and demographics by delivering individual messages, promotions and product offerings.”
Everyone is unique. We like different food, drink, clothes and entertainment, and we take different types of holidays.
Customers, regardless of industry sector, prefer a personal touch when dealing with an organisation or a member of its staff. Making the consumer feel like an individual, rather than just another shopper, matters. So why tar everyone with the same brush?
Travel back a century, before superstores and the world wide web existed, and personalised experiences were the norm rather than the exception. The concept of personalisation isn’t new, and neither is website personalisation: it first became popular in the mid-1990s with products such as ATG and Broadvision, which coined an early form of “personalisation” as a means of enhancing a website to improve the visitor experience.
Before diving deeper into personalisation and how it can work for brands, it’s important to note that personalisation is not customisation. Customisation is completed by the user, allowing them to tailor their experience on a particular website or on the web in general. For example, a website offering the ability to change its default colours and background is something the user controls to suit their own needs or mood; this is customisation.
Besides the fact that customers now expect to be treated as unique individuals and to receive a personalised experience, a number of technological and web factors have made personalisation a reality.
Initially, only the biggest and richest websites could afford to purchase and deploy the technology required to discover and produce personalised content and features. Now the cost of deploying a CMS capable of personalisation has dropped dramatically. Solutions from Drupal, WordPress, Umbraco and our in-house platform, Holistic (all of which are open source products), can be acquired for a fraction of the cost of previous years.
These systems have also become easier to maintain and use. There isn’t a need to have dedicated teams, additional hardware and resource or a large support package to manage personalised experiences.
We are now in an era where a plethora of content is available. If anything, too much content is floating around the web, meaning organisations struggle to get the right content, in the right context, at the right time, in front of their audiences. A recent Janrain survey found that almost 74% of website visitors feel frustrated when presented with irrelevant content. This problem will, by its nature, only increase as markets become more competitive. Personalisation, however, bypasses the problem by operating within a more targeted environment.
Types of personalisation.
There are two main types of personalisation: explicit and implicit. Neither type is siloed; both methods can be combined into a single user experience to provide the ultimate personalised experience.
Explicit personalisation is the delivery of tailored content or functionality based on hard facts gathered when a user performs an action such as filling out a form, creating an account or making a purchase. The information could be stored against a profile or user account in a CRM, ERP or website portal.
A profile is a collection of settings for the user that tells the system to give access to a defined set of content and functionality based on the available data, tailoring the experience for a closer connection with that user. Examples of information could be gender, preferences, likes and dislikes, membership information or purchase habits.
Tracking tools can find out a lot about the user without asking a single question. Details such as location (Geo IP), browser, connection speed, screen size, screen colours, repeat visits, traffic source and other details of the visit can all be captured and then used to build rules for delivering personalised content in the future.
Implicit personalisation could be seen as the ultimate goal of every personalised website because, unlike explicit personalisation, it doesn’t require a user to log in or provide details. Sometimes called behavioural tracking, implicit personalisation monitors and tracks a user’s clicking activity on a site, allowing an organisation to follow the visitor’s behaviour and then, based on where they go and what they do, return personalised content or recommendations specifically for them.
This type of personalisation may seem less obvious to the visitor than explicit personalisation. Amazon’s on-site recommendations fall into this category; the smart emails that Amazon sends to its customers, however, are powered by explicit personalisation. Both are powerful in their own ways, but together they provide relevant offers and information that can really boost conversions.
Understanding audiences: Data.
Before being able to provide personalised content, an organisation needs to know and understand its audiences.
This is where a CRM, analytics and web usage data, amongst a number of other systems, play an important role. Understanding the differences and similarities between users allows the creation of rules and the reasoning behind personalised content. In the most basic sense, personalisation is the use of customer information to deliver the right content to each visitor, every time. That is achieved only with data.
We express this with a simple formula:
Data + Content = Personalisation.
The better you know your audience, and the better you know which offers to deliver when and where, the more relevant your content will be and the more powerfully it will drive customer engagement and conversions. For most companies, obtaining data on audiences or clients isn’t too hard; organisations have access to massive amounts of data from websites, emails, e-commerce systems and offline sources.
Quality over quantity isn’t quite the case here, nor the other way round; the reality is that quality and quantity go hand in hand. The more data you have on your audiences, the better you can understand their needs. It’s key that this data is relevant, up to date and, most importantly, correct. Therefore, a key first step when looking into audience data is bringing it all together into a single view: data centralisation.
Without merging the different datasets, noted as events in the diagram, you lack a complete understanding of a customer’s interactions. Creating a “single customer view” is essential to power any kind of personalisation and should be the first port of call when tackling a personalisation strategy.
Armed with a single view, only then can we delve into the data to check and review the quality of the information we hold on users. Good data produces more accurate services, systems, trends, recommendations and analytics. The better the quality of the data, the better you can understand your markets and audiences, and the cleverer you can be with your personalisation rules, logic and goals.
[Diagram: a single customer view, built by merging customer and event data.]
Data can be acquired in many ways. However, data quality is something a provider will need to oversee once that data is acquired. The features of quality data can be categorised as follows:
Completeness.
The indication of whether the data necessary to meet current and future business information demands is available in the data resource. Missing data is missing intelligence.
Validity.
This is the validity of the data put into a system. For example, a simple validation rule on an “email address” input field (such as checking there is a valid @ symbol in the field content) on a website’s Contact Us form helps to ensure the data is in a correct format before being pushed into a CRM.
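As a minimal sketch of such a validation rule (the pattern below is deliberately simple and illustrative, not a production-grade email check):

```python
import re

# Deliberately simple pattern: one "@", no spaces, and a dot in the
# domain part. Real-world email validation is far more involved.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value: str) -> bool:
    """Return True if the value looks like an email address."""
    return bool(EMAIL_PATTERN.match(value))

# Only records that pass validation would be pushed into the CRM.
print(is_valid_email("jane.doe@example.com"))  # True
print(is_valid_email("not-an-email"))          # False
```

A rule like this sits at the point of capture, so malformed addresses are rejected before they ever reach the CRM.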
Accuracy.
This refers to whether the data values stored for an object are the correct values for its type. To be correct, a data element must hold the right value and must be represented in a consistent and unambiguous form. For example, a birthday could be 15th January 1979. If the field expects a UK date, then storing this value as a US date, e.g. 01/15/1979, would be inaccurate because it is an incorrect value.
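The birthday example can be enforced in code by parsing against the single expected format, so a US-ordered date is flagged rather than silently stored. A small sketch, assuming the system standardises on UK day/month/year:

```python
from datetime import date, datetime

def parse_uk_date(value: str) -> date:
    """Parse a date the system expects in UK day/month/year format."""
    return datetime.strptime(value, "%d/%m/%Y").date()

# 15th January 1979 stored as a UK date parses correctly...
print(parse_uk_date("15/01/1979"))  # 1979-01-15

# ...while the same birthday written as a US date (01/15/1979) fails,
# because 15 is not a valid month, so the bad value is rejected
# instead of being stored as the wrong date.
try:
    parse_uk_date("01/15/1979")
except ValueError:
    print("rejected: not a valid UK-format date")
```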
Consistency.
Data needs to be consistent at all levels and across all datasets; a record should match on all platforms. Taking the birth date example, we’d expect all dates in the system to be stored in UK format, rather than one field using US date format while ten others use UK format.
Integrity.
Data integrity is the assurance that information is unchanged from its source and has not been accidentally (through programming errors) or maliciously (through breaches or hacks) modified, altered or destroyed.
Timeliness.
Timeliness references whether information is available when it is expected and needed. Some would argue that timeliness is the most important factor of data quality, due to the increasing demand for real-time, data-driven decisions across a wide range of industries.
[Diagram: the dimensions of data quality. Completeness: are all data items recorded? Validity: does the data match the rules? Accuracy: does the data reflect the real-world object or variable source? Consistency: are the relations between entities and attributes consistent within and outside the data set? Integrity: can we match the data set across data stores? Timeliness: is the data available as needed?]
There are different ways of maintaining the quality of data. At a very minimum, users should be trained both in how to input information into systems and in how to use the data those systems contain. The desired scenario, however, is one where data is pushed into the system automatically, direct from the source.
It’s now much easier to integrate CRM and data systems with input systems such as websites. Thanks to the APIs available for most CRM systems, partners and agencies can integrate contact, enquiry and quote forms to push content automatically and directly into a system. Gone are the days of receiving an email, then having a member of staff re-input the data into a system, increasing the possibility of errors.
Data control systems should be in place to maintain the quality of the data and minimise human error. This could mean allowing the system to apply a threshold, where all data beyond a certain point or period is regarded as useless, so that only useful data stays in the system, and having a clear definition or schema of what is needed.
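A threshold rule like the one described can be sketched in a few lines. The records, field names and two-year cut-off below are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical records: each carries the date it was last refreshed.
records = [
    {"email": "a@example.com", "last_updated": date(2018, 3, 1)},
    {"email": "b@example.com", "last_updated": date(2014, 6, 12)},
    {"email": "c@example.com", "last_updated": date(2017, 11, 20)},
]

def prune_stale(records, today, max_age_days=730):
    """Keep only records refreshed within the threshold (default two years)."""
    cutoff = today - timedelta(days=max_age_days)
    return [r for r in records if r["last_updated"] >= cutoff]

fresh = prune_stale(records, today=date(2018, 6, 1))
print([r["email"] for r in fresh])  # ['a@example.com', 'c@example.com']
```

In practice the threshold would be chosen per data type; booking history might stay useful far longer than, say, a browsing session.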
An example would be with classifying reviews from customer feedback. Some customers may give a good review, some may give bad, and some may comment with something irrelevant. An ideal system would only keep the useful reviews for the data to be analysed. At times, the challenge is not the data but rather the data integrated from external systems that cannot be replaced.
Regardless of the type of the data, never underestimate its power. We have seen systems in the past where seemingly irrelevant data has made a large impact. For example, in one personalisation system the data attribute with the biggest impact on a user’s preferences was their unique User ID. Upon investigation, we found the ID was incremental: those with lower ID values had been using the system longer, and showed different buying habits to those with higher IDs.
High-quality data must be clean, up to date and comprehensive. For data to be clean, the collection process should be set to a high standard: systems should automatically reject dirty data that does not give sufficient, accurate and consistent results. Enabling tools and systems on the web to drive the process is one way of ensuring the data obtained is clean, which in turn helps to make reports as accurate as possible. Up-to-date data is especially important in the travel and tourism industry, as it allows for predictive analytics; it also means that when a customer makes a booking, real-time prices appear and the data is uploaded into the decision source.
Armed with a single view of your datasets with quality data, you should now be able to move into the next steps of personalisation.
Which personalisation method should you look at first? In the following sections we’ve detailed the four main methods of personalisation with the aim of identifying the correct method for your particular organisational circumstance.
They are user profiling, machine learning, filtering, and trend and data analysis.
Web personalisation systems are used to enhance the user experience by providing tailor-made services based on the user’s interests and preferences, typically stored in user profiles.
A user profile is a collection of information associated with a user. It can be defined as the explicit digital representation of the identity of the user based on their interactions with an organisation or brand. The user profile helps by associating characteristics with a user and ascertains the interactive behaviour of the user along with preferences.
User profiling can be defined as the process of identifying data about a user’s or user group’s interests, then providing relevant content to match those interests. With this approach, customers are grouped into buckets based on commonalities. Typically, we use traditional categories (geographic, demographic and behavioural) to create segments.
Example use cases of this approach include segmenting by age, gender, income level, hobbies or location, or by the type of online behaviour observed, such as what customers click, like, or have historically purchased from an organisation.
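A minimal sketch of this kind of rule-based bucketing follows; the segment names, thresholds and customer fields are illustrative, not a recommended scheme:

```python
def assign_segments(customer):
    """Place a customer into demographic, geographic and behavioural buckets."""
    segments = set()

    # Demographic: age bands.
    age = customer.get("age")
    if age is not None:
        if age < 35:
            segments.add("young-adult")
        elif age < 60:
            segments.add("mid-life")
        else:
            segments.add("senior")

    # Geographic: departure region.
    if customer.get("region") == "South West":
        segments.add("bristol-departures")

    # Behavioural: observed clicks and past purchases.
    if "family-holidays" in customer.get("clicked", []):
        segments.add("family-friendly")
    if customer.get("total_spend", 0) >= 2000:
        segments.add("high-value")

    return segments

customer = {"age": 41, "region": "South West",
            "clicked": ["family-holidays"], "total_spend": 2400}
print(sorted(assign_segments(customer)))
```

Each segment can then be mapped to its own content, offers or email stream.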
[Diagram: an example from a fictional site, Holidays.com. A visitor searches for a holiday from Bristol to Greece or Malta, June 2015, 2 adults and 2 children, at £700 or less; views “best family friendly hotels”; and clicks an internal Greece banner link. On their next visit, instead of generic segments (worldwide trips, London airports, business class, luxury hotels), the site promotes short-haul holidays from local airports that are family friendly, near the coast and £700 or less.]
Levels of profiling.
As used by many businesses, segmentation is often an effective but manual process with little sophistication. Automated segmentation can refine this, both by allowing the creation of a greater number of segments and by allowing segments to be updated in real time.
Inspiretec use machine learning-based segmentation in our Hot Leads tool. We use a specific algorithm set to monitor website behaviour and predict the top 1% of people with the highest propensity to purchase. The machine uses past purchase history combined with other activity (searches, reviews, questions, offer views, page views) to create a profile of what a buyer looks like, which is regularly retrained to ensure accuracy.
This highly targeted segment can then be called, or promoted to online, to encourage those visitors to make a purchase, or to consider making one in the near future. It gives sales and marketing teams more intelligence with which to tailor communications, increasing commissions for the salesperson, increasing the number of qualified leads from marketers and generating more revenue for the organisation.
For more, please speak to our Product team.
Levels of profiling range from macro (few data points and large segments, or more data points, e.g. lifestyle, with fewer segments) to one-to-one (propensity to buy and collaborative filtering, with real-time, multi-channel, individual-based preferences).
As the industries and markets we operate within get more competitive, and they will keep doing so in 2018, specifically in travel and tourism, the need for marketers to be smarter increases.
Machine learning and artificial intelligence are appearing everywhere in marketing right now.
If there’s a high-waste, menial task to be done, or something you can’t do more than a few times before it becomes impossible to manage, there’s usually a way to get it done without human involvement. Machine learning enables systems to learn, and then perform tasks, on their own.
Many organisations that deal with large amounts of data use machine learning for solutions such as classifying, clustering, predicting and mining that data. Given a set of rules and instructions to follow, a system can teach itself and then apply those rules to new input data. It can analyse data in ways a human cannot, solving problems that have been around the travel industry for some time, and it can identify patterns we can’t always see.
Machine learning methods.
There are different learning methods available such as supervised learning, unsupervised learning and semi-supervised learning algorithms.
Supervised learning is similar to having a teacher instructing on what to do to achieve a clear objective. It is mostly used for classification (when the output is a category) and regression (when the output is a real value) problems, where the data is labelled and the true output is known. The aim is to map the function so that when new data is presented to the model, the prediction is as accurate as possible, or near enough to the previous data set.
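A toy illustration of supervised classification is a one-nearest-neighbour model: label a new point with the label of its closest labelled example. The features (nights, spend per night) and labels below are invented:

```python
import math

# Labelled training data: (nights, spend-per-night) -> holiday type.
training = [
    ((3, 80),  "city-break"),
    ((2, 95),  "city-break"),
    ((14, 60), "beach"),
    ((10, 55), "beach"),
]

def predict(point):
    """1-nearest-neighbour: label a new point with its closest example."""
    _, label = min(training,
                   key=lambda pair: math.dist(pair[0], point))
    return label

print(predict((4, 90)))   # city-break
print(predict((12, 50)))  # beach
```

Real systems use richer features and algorithms, but the supervised pattern is the same: learn from labelled history, then predict for unseen data.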
Unsupervised learning generally applies when dealing with raw data. Data is fed into the system without known outputs, leaving the system to train itself by finding its own patterns and discovering the structure of the data. It’s mostly used in clustering, to discover related groupings in the data, such as grouping similar customers based on their booking history.
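The customer-grouping example can be sketched with a tiny k-means clustering loop. No labels are supplied; the algorithm discovers the two groups itself. The data and starting centroids are invented and fixed so the run is deterministic:

```python
import math

# (bookings per year, average spend) for six hypothetical customers.
customers = [(1, 300), (2, 350), (1, 280),      # occasional, low spend
             (6, 1200), (5, 1500), (7, 1100)]   # frequent, high spend

def kmeans(points, centroids, iterations=10):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            best = min(range(len(centroids)),
                       key=lambda i: math.dist(p, centroids[i]))
            clusters[best].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(axis) / len(cluster) for axis in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(customers, centroids=[(0, 0), (10, 2000)])
print(clusters[0])  # the low-frequency, low-spend group
print(clusters[1])  # the high-frequency, high-spend group
```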
Semi-supervised learning sits somewhere between supervised and unsupervised learning, where some of the data is labelled with known attributes and some is not. An example could be a library of images, where some images are labelled (with destination, category etc.) but the rest are blank. The system uses a hybrid approach to make use of both the known and unknown data.
Machine learning in the travel industry.
Machine learning is used within the travel industry to enhance sales and to offer customers reliable or more accurate services. The use of predictive analytics is also apparent within the travel industry. Using predictive modelling combined with large amounts of data can empower airlines, travel agencies, airports and the travellers themselves. One of the predictive analytic components is a “recommender system” for travel products (flights, hotels and extras).
There are hundreds of possible flight combinations connecting London and Dubai, but when combining all the additional services and extras, this number becomes well into the tens of thousands. However, is the combination of services relevant to a certain passenger? Which hotel or apartment is most suitable for a group of friends who booked a cruise holiday for next summer? Recommendation systems provide the answers to such questions. It’s a win-win function for travel providers and customers by providing the most valuable and relevant option for the users while maximising bookings and revenues in the process.
Big travel providers such as Booking.com use machine learning to review the sequences of words users type across their site, mapping them to categories such as cities, accommodation and facilities. They call this a “named entity classification task.” All these interactions create vast amounts of structured and semi-structured information containing valuable insights about users’ experiences on the website, their accommodation and the places they have visited.
Booking.com monitors users’ navigation around the site and any text they input, in order to learn more about their preferences. This enables it to return results that are more relevant to each customer. It takes as much data as possible into consideration so that results adapt dynamically.
Other big providers such as Kayak use machine learning to remove duplicates from the results that come through their aggregator. Aviation authorities use machine learning and optimisation techniques to find the best air routes, timings, dynamic flight pricing and staff allocation.
Travel and tourism organisations also use machine learning for marketing purposes, such as sending personalised, tailored holiday offers via email marketing and other forms of direct marketing. It’s important to note that using data in this way doesn’t just help sales and marketing teams hit their targets; it can assist every level and department of an organisation with its processes.
We also see the hotel sector increasingly using machine learning to help manage and review operations and customer insights. Hotels use reward programmes to collect data on their customers, so they can track customers for increasing levels of personalisation. When you phone or walk in to book a hotel and present a reward reference number, the hotel knows instantly whether you are a good spender, what kind of room you prefer and whether to offer extras such as Wi-Fi in your room rate. On the internal side, hotel operations are significantly helped by demand forecasting and room pricing, enabling them to reduce the number of empty beds and make the most of seasonal offer opportunities.
Every organisation, shop, operator, agency or brand can benefit from this kind of intelligence. Please speak to us should you wish to explore how machine learning can work for your travel organisation.
Product recommendations are common features of most e-commerce websites and commerce systems. The basic technology behind product-to-product recommendations is content-based filtering and collaborative filtering.
These techniques, made famous by Amazon and Netflix, use an aggregate of the browsing behaviours of shoppers visiting the site to organise recommendations, usually presented as “people who viewed this also viewed these other items” or “people who bought this item also bought these other items.” The result? The website can showcase a limited, specific set of items likely to interest a customer who is looking at a particular item or has placed items in the cart.
Content-based filtering uses the characteristics of an item to match it to a user’s profile, and is commonly used in Netflix’s and Pandora’s recommendation engines. It’s like asking a colleague to recommend a holiday provider: it’s natural for the colleague to ask what kind of holidays you are looking for, and then to think of a few places similar to those you have told them you have visited or liked previously.
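The colleague analogy can be sketched as scoring catalogue items by how many of their attributes overlap with what the user has liked before. The items and attributes below are invented for illustration:

```python
# Each catalogue item is described by a set of attributes.
catalogue = {
    "Malta beach resort": {"beach", "family-friendly", "short-haul"},
    "Dubai city tower":   {"city", "luxury", "long-haul"},
    "Greek island villa": {"beach", "luxury", "short-haul"},
}

def recommend(liked_attributes, already_seen=()):
    """Rank unseen items by attribute overlap with the user's profile."""
    scores = {
        name: len(attrs & liked_attributes)
        for name, attrs in catalogue.items()
        if name not in already_seen
    }
    return max(scores, key=scores.get)

# The user previously enjoyed a short-haul, family-friendly beach holiday.
profile = {"beach", "short-haul", "family-friendly"}
print(recommend(profile, already_seen={"Malta beach resort"}))
# Greek island villa
```

The core work, as noted below, is assigning those attributes to every item in the catalogue.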
Collaborative filtering, by comparison (and explained in more depth later in this section), is more complex and is delivered via recommendation systems and machine learning. But content-based filtering has some advantages worthy of consideration.
Since content-based filtering relies on the characteristics of the objects themselves, results are likely to be highly relevant to a user’s interests. This is especially valuable to organisations with large libraries of a single type of content, such as streaming media services and subscriptions.
Easy to implement.
The data science behind content-based systems is straightforward compared to the highly sophisticated mathematics involved in creating a collaborative filtering system. The core implementation work is assigning attributes.
Content-based filtering also avoids the cold start problem. It still needs initial input from users before it can start making recommendations, but the quality of those early recommendations is likely to be much higher than with a collaborative filtering system, which only becomes robust after large amounts of new data have been added and correlated.
To design a recommender system, we have two different avenues to choose from. The option we would recommend when content is large in size or diverse is collaborative filtering.
This is a technique used across social media, retail and streaming services, to name a few. The technical elements (maths, programming, machine learning techniques) appear difficult at first, but the concept is straightforward: if two or more people have the same interest in one product, they will probably have similar tastes in other products. The same holds during web experiences.
Collaborative filtering relies on the behaviour of users, which gives it an advantage over content-based filtering.
Large user base.
Everyone uses the internet, and the more people using a service, the better the recommendation system will become without having to rely on subject area expertise.
Flexibility across different domains.
Collaborative filtering suits varied sets of items: where content-based filtering relies on metadata, collaborative filtering is based on live activity on the web. This allows connections between two or more items that may be totally different yet relevant to some set of users.
More serendipitous insights are produced.
When designing a recommendation system, accuracy isn’t the highest priority, because there will never be 100% accuracy. Most users have interests spanning different subsets of the data, which can lead to more diverse recommendations.
More nuance around items is captured.
Even a well-designed content-based filtering system will only capture some features of a certain item. By relying on actual users’ behaviour on the web, the system can recommend items that have a greater similarity with one another, rather than a limited comparison of their features.
Collaborative filtering methods.
There are two methods of collaborative filtering: item-based and user-based.
Item-based filtering, popularised by Amazon, shows the relationship between different items based on which are purchased together. The more often two items are purchased together, appear in the same shopping cart or feature in the same user’s activity, the closer the system places them. So, when a customer books a holiday, the system could recommend extras such as car rental or additional luggage if other users often purchase these items at the same time.
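Counting co-occurring pairs across baskets is the heart of this approach. A toy sketch, with invented booking data:

```python
from collections import Counter
from itertools import combinations

# Each booking is the set of products purchased together.
bookings = [
    {"holiday", "car-rental", "extra-luggage"},
    {"holiday", "car-rental"},
    {"holiday", "extra-luggage", "travel-insurance"},
    {"flight", "extra-luggage"},
]

# Count how often each pair of products appears in the same booking.
pair_counts = Counter()
for basket in bookings:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def also_bought(item):
    """Products most often booked together with the given item."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return [name for name, _ in related.most_common()]

print(also_bought("holiday"))  # car-rental and extra-luggage rank first
```

Production systems normalise these counts (popular items co-occur with everything), but the co-occurrence signal is the same.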
User-based filtering calculates similarity between users, based on ratings, likes, views and other activity, rather than calculating item similarity. To recommend an item, the system looks at other users with similar behaviours and suggests items they liked. For example, if you have booked a number of holidays, the system will profile you by preferences such as flights, destinations, price and star ratings, and will use that information to make recommendations to others who show similar behaviours.
[Diagram: user-based filtering versus item-based filtering.]
The recommender system has no idea why any of the items are related to one another, it only knows when they are being placed in the same basket together, or booked by other people with similar preferences. This can be a shortcoming as well as an advantage when items that need to be filtered are heterogeneous, as in social networks or online retailers.
Item-based or user-based collaborative filtering can be measured using different machine learning techniques, depending on which is more efficient for the particular case and the nature of the data involved. If the data is dense, the Euclidean distance measure can be used; in most cases where the data is sparse (e.g. user ratings), cosine similarity is used to measure distance. Other measures, such as k-nearest neighbours or the Pearson correlation coefficient, can also be used, and plenty of open source libraries are available.
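Cosine similarity over sparse ratings can be written compactly: each user is a dict of item-to-rating, and unrated items are simply absent (treated as zero). The users and ratings below are invented:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two sparse rating vectors."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

alice = {"Malta": 5, "Dubai": 1, "Crete": 4}
bob   = {"Malta": 4, "Crete": 5}     # tastes similar to Alice's
carol = {"Dubai": 5, "New York": 4}  # quite different tastes

print(round(cosine_similarity(alice, bob), 3))    # close to 1: similar
print(round(cosine_similarity(alice, carol), 3))  # close to 0: dissimilar
```

Because only shared items contribute to the dot product, the measure copes naturally with the sparsity of real rating data.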
Such measurement techniques appear complex at first; however, once the platform administrator is trained on the system and its features, managing (and in particular measuring) the system becomes clearer.
Events and user activities are often ambiguous. Searching for a holiday doesn’t confirm that a person intends to book or even take one, and viewing a post doesn’t confirm whether the user liked or disliked it. That is why user ratings are one of the main inputs to collaborative filtering systems; but users don’t rate everything they like, and sometimes don’t rate anything at all. In machine learning, the missing values are typically replaced with either 0 or an average value.
Cold start is another problem a recommendation system can face when there is no user history, since the system generally relies on history to recommend items. This applies to new items as well as new users. Items that are frequently viewed or bought get recommended often, but items without any history will not surface in the recommendation engine, and the system will not have good recommendations for new users. This problem can be reduced by learning some basic information to jump-start the user, for example through data enrichment such as importing social network details.
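A simple mitigation, sketched here with made-up booking data, is to fall back to globally popular items whenever a user has no history yet:

```python
# Hypothetical sketch: recommend the most popular items to cold-start users.
from collections import Counter

# Booking log across all users (invented for the example).
bookings = ["beach_resort", "city_break", "beach_resort", "ski_trip", "beach_resort"]
popular = [item for item, _ in Counter(bookings).most_common(2)]

def recommend(user_history, personalised_fn=None):
    """Use the personalised model when history exists, else popular items."""
    if not user_history:  # cold start: nothing to learn from
        return popular
    return personalised_fn(user_history)

print(recommend([]))  # a brand-new user gets the crowd favourites
```

Once the user has a few interactions, the system can switch over to proper collaborative filtering.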
Complexity and expense can also become a problem, since the system can run into scalability issues when the number of items and users grows large (hundreds of millions), especially when recommendations need to be served in real time. Calculating the relationships offline overnight, using batch processing, makes serving recommendations much faster, even if the data is not updated in real time.
Designing and building a recommendation system with collaborative or content-based filtering is a huge project that requires data science, engineering, and computational intelligence skills. Knowledge of data processing and storage frameworks (e.g. Spark or Hadoop) is another requirement.
Programming skills are also required. However, languages such as Python, Scala and Java offer machine learning libraries that support the implementation of collaborative and content-based filtering, which makes statistical analysis tasks easier to conduct.
That said, many CRM and CMS systems come with recommenders out of the box, so in-house expertise is not needed to run them, and many vertical-specific specialist systems are also available.
Trend and data analysis.
The travel industry generates and handles large quantities of data relating to reservations, costs, prices, customer feedback, product information and much more, leaving a long trail of data behind it. We have touched upon data earlier, but not its measurement, or how it can be used on its own for personalisation.
Travellers generate a large amount of data across different channels and devices at different stages of the buying process (these stages can be labelled differently, but we usually consider the following categories: Interest, Awareness, Learn, Shop, Buy). New and existing travel organisations are looking for better ways to use the information generated by customers to provide effective and profitable products and services, and for solutions to forecast market trends and customer intentions. This is where big data analytics across large volumes of varied data is needed to produce actionable business insights.
Many of the big travel companies have already adopted big data analytics to deliver real-time, personalised and targeted travel experiences. Big data analytics offers many capabilities, and a key one is a truly personalised customer experience. It allows travel organisations to be more responsive and focused on customers' needs, including preferences based on personal data obtained from social media platforms. More accurate service and product recommendations bring better customer satisfaction and customer loyalty.
Price is always a deciding factor for travellers when they shop around. Data analytics replaces manual fare analysis with smart automation, gathering, indexing, filtering and analysing live data from many different sources. Live analysis of competitors' prices helps service providers create better pricing strategies, and big data analysis enables time-series forecasting to better serve customer requirements.
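At its simplest, a time-series price forecast can be sketched as a moving average over recent observations. The weekly fares below are invented for illustration; real systems would use proper forecasting models, but the shape of the idea is the same:

```python
# Illustrative sketch (made-up prices): forecast next week's fare as the
# mean of the most recent observations.
weekly_prices = [420.0, 435.0, 410.0, 450.0, 445.0, 460.0]

def moving_average_forecast(prices, window=3):
    """Forecast the next value as the mean of the last `window` prices."""
    recent = prices[-window:]
    return sum(recent) / len(recent)

print(moving_average_forecast(weekly_prices))
```

Comparing such a forecast against competitors' live prices is what lets a pricing strategy react ahead of the market rather than behind it.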
Understanding buying patterns, feedback and data gathered from social media platforms, forums, call centre conversations and so on allows organisations to identify customers' intents and preferences, all contributing to a better service strategy. Tracing, tracking and analysing customer behaviour helps organisations recommend more relevant products in the future.
In recent years, competition has increased within the travel markets. Service providers are looking to engage, attract and convert their customers through more targeted marketing. Big data analytics makes it possible to focus marketing efforts on specific travellers by customising offers to their requirements. By analysing large amounts of unstructured data, service providers gain more valuable insights, allowing them to deliver more specific offers or services at the right time and place, to the right customers, and via the right channel or device. Combining GPS technology with data analytics on websites enables the tracking of customers and allows for location-relevant live offers.
This shows that "big data" is reshaping the travel industry. A data analytics strategy that can identify customer trends, travel patterns, the pulse of the business and new opportunities is becoming indispensable. There are, however, challenges, such as a shortage of data scientists, affordable infrastructure and deployment costs. Early adopters of big data and data analytics will have the edge over rivals in this market. We strongly recommend that data is on the roadmap of every organisation operating within the travel and tourism industry going into 2018.
Trend analysis is often used to compare data over time and to identify consistent results and trends.
Organisations then develop strategies to respond to trends in line with business targets.
Such analysis helps an organisation understand how the business is performing and predict where current performance will lead, whether to a loss or to a profit. It gives a glimpse of how well things are going, or how badly, so that issues can be addressed and problems fixed.
Trend analysis can be used to improve different aspects of the business: identifying areas that are performing well so that the approach can be applied elsewhere, identifying poor performance so that its causes can be rectified, and gathering evidence to support decision-making.
Several factors contribute to good trend analysis, for example setting key performance indicators, which is a good starting point for standard trend analysis. When deciding which key performance indicators to track and review for trends, start with factors such as sales figures, costs and cash flow.
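The basic mechanics of reviewing a KPI for a trend can be sketched as fitting a least-squares line through the figures and reading off the slope. The monthly sales numbers below are invented for the example:

```python
# Sketch of basic KPI trend analysis (figures are invented): fit a
# least-squares line through monthly sales and read off the slope.
monthly_sales = [100.0, 104.0, 103.0, 110.0, 115.0, 118.0]

def trend_slope(values):
    """Least-squares slope of values against their index (change per period)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

slope = trend_slope(monthly_sales)
print(f"sales trending {'up' if slope > 0 else 'down'} by {slope:.2f} per month")
```

A positive slope confirms an upward trend and quantifies it, which is more robust than eyeballing month-to-month changes.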
Sales trends are another area where data can be used to analyse patterns, track and monitor performance, and predict future performance:
— Product categories selling well.
— Products with the best margins and payment terms, which should be prioritised.
— Performance of sales staff.
— Changes in conversion rates.
Also, measuring the financial trends which impact the organisation:
— Measuring overall sales.
— Measuring the cost of goods.
— Measuring net profit.
Lastly, analysing other trends and factors that have an overall impact on performance:
— Stock turnover.
— Terms and conditions (payment terms, debtor or creditor days).
— Hours of trading.
— Number of staff.
Data has the ability to solve a number of problems, just as it can drive an organisation's growth by generating more sales. Trend and data analysis works closely with the other methods of personalisation described in this section, and it is relevant to everything within an organisation, no matter how big or small. A strategy is needed to respond to the trends that emerge, and it must align with overall organisational goals. We can only scratch the surface of data within this booklet.
For more information on the four methods of personalisation, as well as analysing your data, please get in touch with our Business Analysis team.