Writesof is a young company: a small team of engineers, data scientists, and product marketers, each dedicated to advancing natural language generation (NLG) and natural language processing (NLP) technologies.  Our hope is that by using one of the best forms of digital feedback – the behavior of customers on product pages – we can develop technologies that understand and respond to each unique customer, individually and in near real time.  One day, we hope that our technologies will be used in education, in medical diagnosis and treatment, and for security and community awareness purposes.

Today’s leading NLG technologies are very good at writing objective narratives about changes in numerical values.  Examples include sports articles, financial reports, and IT security communiqués.  Since 2014, our company’s founders have been developing NLG and NLP applications specifically for online retail.  Our systems use machine learning to understand the nuances of how humans interact and engage with product messages.  The writing our NLG produces is intentionally subjective and laser-focused on the individual user, creating language that appeals to and resonates with that individual.

We help clients extract audience and user behavior insights in relation to each page’s language.  We then pull this data, via API or scheduled feed transmissions, into our NLG platform, where a unique brand-tuned NLG Persona begins to process how changes in user behavior data are reflected in the differences between product page messages.  We use this data to score language performance, not page performance.  After accumulating some history, our NLG begins to learn how to improve new and existing product pages with respect to each unique audience group.
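As a rough, hypothetical illustration (our production pipeline is far more involved, and all names and numbers below are made up), the idea of scoring language rather than pages can be sketched as: attribute each page’s engagement back to the phrases that appear on it, so that a phrase’s score reflects its performance across every page that uses it.

```python
from collections import defaultdict

# Hypothetical per-page data: page -> (phrases used on the page, engagement rate).
pages = {
    "page_a": ({"hand-stitched leather", "free returns"}, 0.042),
    "page_b": ({"hand-stitched leather", "limited edition"}, 0.051),
    "page_c": ({"free returns", "limited edition"}, 0.018),
}

# Score each phrase by the mean engagement of the pages that contain it,
# so the language (not the page) carries the performance signal.
totals, counts = defaultdict(float), defaultdict(int)
for phrases, engagement in pages.values():
    for phrase in phrases:
        totals[phrase] += engagement
        counts[phrase] += 1

phrase_scores = {p: totals[p] / counts[p] for p in totals}
for phrase, score in sorted(phrase_scores.items(), key=lambda kv: -kv[1]):
    print(f"{phrase!r}: {score:.3f}")
```

In this toy version, a phrase that appears only on strong pages outscores one that appears on weak pages, regardless of how any single page performed overall.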

Five years ago, before Writesof was founded, our team aimed to build a single NLG platform for product communications, ads, and social media posts.  We later recognized that, like human writers, machines can be trained to write with a certain style, in a particular brand voice.  We found that it is far more efficient to train NLG machines on a brand voice – the total product communications footprint – building the system around known (already published) language features such as writing style, message tone, and word choice.  If you processed every word published on Amazon today, for example, you would find very little similarity in grammar and diction, even within the same product category, because a vast number of unique authors have contributed to Amazon product descriptions.  Run the same language processing on a brand site like Nordstrom.com, on the other hand, and you would find far greater similarity in writing style: millions of authors contribute to Amazon product descriptions, versus hundreds for Nordstrom.com.
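One simple, hypothetical way to quantify that kind of stylistic consistency (not our actual brand-voice model) is to measure the average pairwise similarity of product descriptions, for example with TF-IDF vectors and cosine similarity:

```python
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(descriptions):
    """Average pairwise cosine similarity over TF-IDF vectors:
    higher values suggest a more uniform voice across the corpus."""
    vectors = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(descriptions)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(descriptions)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

# Hypothetical corpora: a single-voice brand catalog vs. many-author marketplace copy.
brand_site = [
    "A refined wool coat tailored for effortless layering.",
    "A refined cashmere scarf tailored for everyday warmth.",
    "A refined leather tote tailored for the daily commute.",
]
marketplace = [
    "GREAT COAT!!! warm, buy now, ships fast.",
    "This scarf is made of 100% cashmere fibers, imported.",
    "Tote bag. Genuine leather. Multiple pockets. Best price.",
]
print(f"brand site:  {mean_pairwise_similarity(brand_site):.2f}")
print(f"marketplace: {mean_pairwise_similarity(marketplace):.2f}")
```

A single-voice catalog scores much higher on this metric than marketplace copy written by thousands of unrelated authors.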

Our natural language processing tools analyze metric-based data feeds such as user group profiles, traffic and customer journey data, and on-page user engagement and conversion data.  The system breaks down this data and other key performance indicators by their relationships to linguistic attributes.  Our NLP models then use machine learning to identify which types of language are engaging, informative, and persuasive for a given audience group.
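To give a sense of what relating KPIs to linguistic attributes can look like in miniature (a deliberately simplified sketch, with invented attributes and data, not our production models), one could fit a simple regression from page-level language features to conversion rates:

```python
import numpy as np

# Hypothetical page-level linguistic attributes (columns) and conversion rates.
# Columns: imperative-verb count, second-person pronoun count, mean sentence length.
X = np.array([
    [3, 5, 12.0],
    [0, 1, 24.5],
    [2, 4, 14.2],
    [1, 0, 21.0],
    [4, 6, 11.3],
], dtype=float)
y = np.array([0.048, 0.012, 0.039, 0.017, 0.055])  # conversion rate per page

# Least-squares fit: which linguistic attributes move with conversion?
X1 = np.column_stack([np.ones(len(X)), X])  # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
for name, c in zip(["intercept", "imperatives", "second_person", "sent_length"], coef):
    print(f"{name:>13}: {c:+.4f}")
```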

Our company is set apart from the few other natural language generation platforms by our unique ability to adapt messaging to specific audience features. At Writesof, our team is focused on product description generation that is precision-aimed at user groups and audiences. We call this technology Personalized NLG: language that is aware of audience behaviors and actions, adapting and editing itself accordingly.

Initially, our team aimed to use our NLG internally, but as development continued, we learned just how powerful customer data can be for our specific type of NLG.  In 2018, we created a tool that enables us to process customer data without having direct access to the raw data.  We provide this tool to clients free of charge, delivering it with their full site corpus processed and classified into linguistic chunks – all text and relevant metadata on all product pages.

To use the tool locally on a machine, the client imports as little or as much user data as they wish, for any particular group of product pages.  The tool then begins assigning benchmark values to each word of text.  In this manner, the tool assigns benchmarks to all site language – not values based on user engagement and conversion rates, but randomly encoded arbitrary values that serve as baselines for measuring changes in performance.
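In spirit, the first pass looks something like the following sketch (purely illustrative; the actual tool’s encoding is proprietary): every token receives a reproducible random value that means nothing on its own and exists only to anchor later measurements of change.

```python
import random

description = "Classic trench coat with a removable hood and storm flap."

# First pass: assign each (position, token) pair a reproducible random benchmark.
# The values carry no meaning yet; they only serve as baselines for later passes.
rng = random.Random(42)  # fixed seed so re-runs produce the same benchmarks
benchmarks = {
    (position, token): rng.random()
    for position, token in enumerate(description.split())
}
for (position, token), value in benchmarks.items():
    print(f"{position:2d} {token:<10} {value:.3f}")
```

Keying on position as well as the token means the same word can hold different benchmarks in different places on the page.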

Within 24 hours, the client can run a second analysis with updated user engagement data, by page.  In this execution, benchmark values change, not exclusively through correlations with on-page engagement and conversion, but also through Writesof’s internal weighted scoring parameters, which evaluate the value of language by its connected parts and their respective positions within the content.  For example, adding the pronoun “this” to a lower section of content is unlikely to influence user behavior on the page.  On the other hand, adding the imperative verb “Do” to the first bullet point on the page has a much higher probability of influencing customer behavior.  Additionally, since the words “this” and “Do” are far apart on the page, any contextual or communicative relationship between the two words is minimal at best.
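A stripped-down sketch of that weighting idea (the weights and categories here are invented for illustration, not our actual parameters): a word’s benchmark moves toward the observed engagement change, scaled by how much its placement and grammatical role can plausibly matter.

```python
# Hypothetical weights: words near the top of the page, and words in
# "actionable" roles, get more leverage over the benchmark update.
SECTION_WEIGHT = {"first_bullet": 1.0, "body": 0.5, "footer": 0.1}
ROLE_WEIGHT = {"imperative_verb": 1.0, "noun": 0.6, "pronoun": 0.2}

def updated_value(benchmark, engagement_delta, section, role):
    """Move a word's benchmark toward the observed engagement change,
    scaled by how much that word's placement can plausibly matter."""
    leverage = SECTION_WEIGHT[section] * ROLE_WEIGHT[role]
    return benchmark + leverage * engagement_delta

# "Do" in the first bullet moves a lot; "this" in the footer barely moves.
print(updated_value(0.50, 0.08, "first_bullet", "imperative_verb"))  # 0.58
print(updated_value(0.50, 0.08, "footer", "pronoun"))                # 0.5016
```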

Each word on a product page is valued through multiple interdependent relationships, mostly to do with the surrounding language.  The user data sends signals to our systems, informing them how well all of the language works together for the user.  Sometimes a single word edit can have a huge impact, while other times a complete overhaul results in no change whatsoever.  Our systems are designed to learn the wide-ranging ways that language edits translate into improvements in engagement and conversion metrics.

User data is used as feedback to validate our NLG decision making.  User data drives our machine learning and helps our system make better decisions on piecing together language into persuasive consumable forms.

This tool was a key innovation for us, as user engagement and other user feedback is necessary to power our unique, retail-focused machine learning systems.

The extracted data features are meaningless to any prying eyes – including our team – because each piece of customer data is transformed into thousands of language values that together represent a single product description.  The language values do reflect user engagement and conversion performance to a significant degree, but there are literally hundreds of other valuation parameters connected to all other points of language.  For example, the value of the word “the” will change slightly as it is used in different ways on the page, while the value of the word “color” will vary substantially.
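One way to picture why such values are opaque (this is a generic one-way-transform sketch, not our actual encoding): map each word together with its on-page context through a cryptographic hash, so the resulting numbers cannot be reversed back into the underlying data.

```python
import hashlib

def language_value(token, context, salt="writesof-demo"):
    """One-way map from a (token, context) pair to a value in [0, 1).
    SHA-256 is not reversible, so inspecting the values reveals
    nothing about the data that produced them."""
    digest = hashlib.sha256(f"{salt}|{token}|{context}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

# The same word takes different values in different on-page contexts.
print(language_value("the", "title"))
print(language_value("the", "bullet_3"))
print(language_value("color", "title"))
```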

In our first year, we had the opportunity to work with and learn from leading global retailers and agencies, brick-and-mortar and digitally native, learning about their marketing and writing department workflows and integration needs.

Imagine an army of brand-trained writers that can produce thousands of unique narratives on demand, performing deep analysis of how each published word performs in each channel and with each audience segment.  If your firm is fortunate enough to have an army of writers, linguistics data scientists, and e-commerce data analysts,  you just might have the right ingredients to achieve such productivity.  Your success will depend on managing teams, workflows, and technologies as the clock keeps ticking and competitors relentlessly fight for your customers.

At Writesof, we develop advanced artificial intelligence systems that generate written content using a model similar to the one described above.  Our Automated Copywriting software, “AC”, is a natural language generation system developed specifically for product copywriting.  If you sell thousands of products online, AC will do the heavy lifting, writing product descriptions, advertisements, push notifications, and email campaigns, all designed for optimal personalization and engagement strategies that fit your brand and campaign initiatives.

Writesof’s NLG technology generates narratives that start with structured and unstructured data inputs and result in high-value content that can be generated on demand, and on message.  AC is built for omnichannel retailers, web-only retailers, product manufacturers and distributors, and consumer product brands.  Instead of relying on humans to interpret market-based and linguistic performance relationships, we use powerful cloud computing and proprietary big data analytics that feed our NLG with intelligent insights, producing more effective, goal-oriented messages.

It’s like having a team of data scientists working 24/7 to interpret customer engagement for every word of every message delivered to consumers.  And it’s automated.  AC interprets message performance for each customer you collect data on, while factoring market data and audience segment insights into its decision algorithms.

Automated Copywriting is not your average natural language generation platform. It is a linguistics data analyst that performs deep analysis of customer and market insights, making surgical decisions on each copywriting task.  In a matter of minutes, it can examine every word of copy your organization has ever written – hundreds of thousands of words – assessing your brand messaging fingerprint in terms of identity (voice), writing style, grammar, and lexical usage.  It writes based on performance data, considering every measurable linguistic feature of a written composition, both literary and quantitative. The NLG knows which elements of a message are higher performing and which are relatively underperforming, and it can adapt to performance metric feeds in near real time, automatically editing content to improve message performance for a particular user, segment, or channel.
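To make “brand messaging fingerprint” concrete, here is a tiny, hypothetical stylometric profile (a handful of easily computed features, far simpler than AC’s actual analysis): lexical variety, sentence length, and word length over a corpus.

```python
import re

def fingerprint(corpus):
    """Tiny stylometric profile: lexical variety plus sentence and word length."""
    sentences = [s for s in re.split(r"[.!?]+", corpus) if s.strip()]
    words = re.findall(r"[A-Za-z']+", corpus.lower())
    return {
        "type_token_ratio": len(set(words)) / len(words),
        "avg_sentence_len": len(words) / len(sentences),
        "avg_word_len": sum(map(len, words)) / len(words),
    }

sample = ("Crafted from supple Italian leather, this tote carries your day "
          "in quiet style. Fits a 15-inch laptop. Free shipping, always.")
print(fingerprint(sample))
```

Comparing such profiles across a brand’s published copy is one cheap way to see whether new writing drifts from the established voice.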

Talented writers are an indispensable resource for most organizations.
Business communications, copywriting, and clear, accurate information reporting are essential to the success of any firm that engages a large audience. Writers are the change agents; their words can evoke action and change. The truth is, good writers are scarce, and while demand for good copy is increasing, even subject matter and industry expert writers find it increasingly challenging to keep up. For many creative teams, the pressure on writers to perform keeps mounting, especially when writing workflows are constrained by increasingly complex data analysis.  Data is becoming more voluminous and granular, with machine learning applications piling on top of existing workflow constraints.

Data utility, however, is only as good as a data scientist’s ability to, first, interpret the data; second, transmit the interpretation to marketing; and third, train and guide writers to apply data insights in copyediting workflows.  The additional cost, time, energy, and effort required to implement high-quality data analysis in the copywriting team places many organizations under extreme productivity constraints.  Many have learned to adapt, but most are just hanging on, hoping that their competitors are operating on a level playing field, with similar cost and productivity constraints.

In many cases, data used in the marketing department must be interpreted, transmitted, reinterpreted by other departments, and finally utilized and implemented in the form of strategies and initiatives.  This cycle can be brutal for firms – inefficient and costly – such that many companies underutilize their data.  In fact, you could say that every company that has not maximized its potential in artificial intelligence and machine learning is underutilizing its data.  Don’t feel bad: even tech giants like Google could not say with a straight face that they have maximized their potential in AI and machine learning.

If your copywriters have a good understanding of keyword data and audience engagement data, you most likely have cutting-edge technology partners to thank for it.  For most companies this is not the case, especially for audience engagement data.  There are simply too many variables to compute, and without several integrated systems that all but instruct writers on what to write, it is humanly impossible to calculate precisely targeted messages.  In recent years, marketing personalization automation technologies have given marketers a glimpse of how data can work autonomously to make decisions.  Writesof uses similar techniques and technologies, relying on data to generate messages rather than on less informed human decisions.  Writers are not replaced; instead, they are given a new mission – to focus on the concepts and angles that are fed into the NLG system, where automation takes over.  Instead of repetitively writing hundreds of mundane messages, AC’s NLG does the heavy lifting, writing thousands, even millions, of messages, with decisions made based on the data.  It should be said that the foundation of all messaging is derived from the human minds of writers, but bear in mind that, like the periodic table, writing has a finite set of elements and composite structures.   AC is a machine that learns how real human writers use the elements of language, then predicts new ways to use that language, with performance determined by audience feedback from data insights.

In today’s data-critical competitive landscape, data scientists must be the bridge that communicates data to all of the various teams within an organization.  In many cases, writers have become inundated with multiple streams of incoming data that must be analyzed and put to profitable use as fast as possible.  Consumer behavior and competitive environments can change quickly and the reports and dashboards that today’s most advanced writing teams have access to, however useful they may be, still require interdepartmental communications and strategic collaboration before sound decisions can be made.  Decisions that influence message strategy are more often than not a lesser priority, due to productivity demand and other interdepartmental workflow constraints.

Multivariate message analytics and A/B testing are used to determine which words, stylistic devices, angles, and hooks outperform others.  The greater the production volume in the copywriting department, the greater the likelihood of making substantial improvements to overall message strategy.  The cost-benefit tradeoff of hiring additional writers and data scientists does not bode well for most organizations, which fear short- and long-term reductions in ROI.  The amount of new data that companies have access to is staggering and simply too much for writers to manage effectively, much less while increasing message quality and word count productivity at the same time.
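For readers unfamiliar with the statistics behind message A/B testing, here is a minimal sketch (standard two-proportion z-test, with hypothetical traffic numbers) of how one decides whether variant B’s copy genuinely outperforms variant A’s:

```python
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical test: variant B's headline vs. variant A's, 10,000 visitors each.
z, p = two_proportion_ztest(conv_a=310, n_a=10_000, conv_b=372, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests B's lift is not chance
```

Multiply this by every word, hook, and angle in a high-volume catalog and the human workload problem described above becomes obvious.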

Natural language generation, template editors, and rapid-editing content systems have helped companies make some improvement in these areas, but there is a definite trade-off in the resources required to implement these new technologies at optimal organizational utility.  Natural language generation enables writers to produce large volumes of topically focused narratives with minimal, mostly front-loaded, time, energy, and effort.  In many cases, however, writers are reluctant to adopt this new technology because it disrupts most writers’ natural workflows.

In its basic form, NLG produces template-based narratives with a substantial amount of overlap in language usage.  For copywriters who place a high priority on improving search visibility (SEO), this makes template-based NLG solutions a non-starter.  These basic NLG systems simply cannot create highly unique content.  The result, for example, could be tens of thousands of product pages that recycle the same language strings over and over, in multiple channels – which offers very little advantage for search engine optimization or internal search.  Nor do these basic natural language generation systems typically do a good job of writing longer-form narratives.  They can effectively write 50-word, or even 100-word, pages of content, which does not leave much room for optimally distributed keyword placement.
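The recycling problem is easy to measure.  As a quick, hypothetical sketch, Jaccard similarity over word trigrams flags two “different” template outputs as near-duplicates:

```python
def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def ngram_overlap(a, b):
    """Jaccard similarity of word trigrams: near 1.0 means recycled copy."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

# Two hypothetical template-generated descriptions with one swapped slot value.
desc_1 = "This blue cotton shirt is perfect for everyday wear and easy care."
desc_2 = "This red cotton shirt is perfect for everyday wear and easy care."
print(f"overlap: {ngram_overlap(desc_1, desc_2):.2f}")  # high overlap flags duplication
```

Run across tens of thousands of pages, scores like this expose exactly the kind of duplication that undermines search visibility.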

Copywriters know that a single product can be associated with 50-100 keywords and long-tail key phrases, so a 50-word product description simply cannot meet audience needs – not only in terms of search visibility and discoverability, but also with respect to sales communication.  Countless studies have shown that less content equates to less chance of converting page visitors.  Above-the-fold content should be kept concise for type-A consumers, and below-the-fold content should offer as many details as possible about the “product experience”, features, and benefits, in a strategically placed order.

Other natural language generation systems have been designed to report and create “stories” about numbers.  These systems are still quite impressive, but they primarily use quantitative data, plus non-numerical data extracted from quantitative features.   These NLG systems can write endlessly about financial events, sports events, business events, and just about any other subject or topic where large amounts of quantitative data are readily available and constantly changing.  As a business model, these NLG systems are superbly designed.  For example, a single NLG platform can produce financial articles about each and every NYSE- and NASDAQ-traded company, on a daily basis, even producing several different articles at different times of the day based on quantitatively triggered events, such as a percentage change in stock price.  These systems can likewise write about sports, reporting on the hundreds of professional and collegiate games and matches that occur around the world each day.  The primary requirement for generating such articles is player statistics, delivered in a structured data feed that reports each player’s variable statistics on a timeline and event basis.   Sports, business, and finance content publishers today use NLG to publish thousands of articles per day, and as these systems become more intelligent, publishers become more dependent on and bought into the new system.  There is, however, a far more contentious balance between journalistic and business writers and the machines creeping into their workplace.  Even so, writers are needed to keep the machines “learning” new concepts – an irony, since humans are thereby participating in their own eventual replacement.  Copywriting and marketing are not quite the same, as copywriters are more interested in direct ROI than in audience recognition and prestige.
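In miniature, this data-triggered style of generation (describing those systems in general, not any vendor’s actual implementation, with invented ticker data) looks something like the following: a quantitative threshold fires, a template fills, and an article ships.

```python
def stock_move_article(ticker, prev_close, price):
    """Emit a one-line 'story about numbers' when a move crosses a threshold."""
    pct = (price - prev_close) / prev_close * 100
    if abs(pct) < 3.0:          # hypothetical trigger threshold
        return None
    direction = "surged" if pct > 0 else "slid"
    return (f"Shares of {ticker} {direction} {abs(pct):.1f}% to "
            f"${price:.2f} in today's session.")

print(stock_move_article("ACME", prev_close=41.80, price=44.25))
print(stock_move_article("ACME", prev_close=41.80, price=42.10))  # below threshold -> None
```

The contrast with Writesof’s approach is clear: these systems narrate numbers as they change, while Personalized NLG uses behavioral data to reshape the language itself.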