Webinar: Unlocking the Future of Marketing Print

We recently hosted an engaging webinar in partnership with the Procurement Foundry, ‘Unlocking the Future of Marketing Print’, bringing together procurement and marketing professionals to explore how to optimize print management processes and maximise the impact of print campaigns. Hosted by Mike Cadieux, Founder and CEO of the Procurement Foundry, and featuring an expert panel including perspectives from a client, a consultant, and a solution provider, the session delivered a 360° view of the evolving print landscape. Our speakers included:

  • Tony Massey, Executive Director, APS Group
  • Heather Padgett, Senior Commercial Marketing Manager, HOYA Vision Care North America
  • Fre Rammeloo, CEO, Dexter Global Business Solutions Inc.

Print Media: Far from obsolete

The discussion kicked off with a look at what print media means today, and the consensus was that it covers anything tangible you can print on to display a brand message. Far from dying out, print is experiencing a renaissance and is becoming a more distinctive communication method alongside its digital counterpart.

While digital dominates the marketing mix, print offers something tactile and immersive that can’t always be replicated online. From packaging that can’t be digitized to direct mail making a comeback, brands are rediscovering print’s ability to create meaningful, lasting impressions – especially as digital fatigue sets in among consumers.

Sustainability: More than a buzzword

There is clearly a growing demand for sustainability within the printing industry, with it becoming a genuine priority rather than just a ‘nice to have’. Consumers and employees alike are becoming more aware and increasingly critical, driving change and pushing brands to adopt greener, more ethical practices, with carbon offsetting and sustainability initiatives growing in popularity as solutions.

Attendees heard how the industry is innovating through recyclable materials, vegetable inks and on-demand printing to minimize waste. It was also highlighted that digital isn’t carbon-neutral either, and benchmarking environmental impacts across both channels is vital to help develop future strategy.

A blended approach for maximum impact

Rather than choosing between print and digital, speakers emphasized the power of using both mediums in partnership to support the consumer’s path to purchase. Strategies such as personalized print campaigns supported by digital tracking, QR codes, and A/B testing were showcased as ways to measure ROI and optimize the media mix. Asking consumers about their communication preferences and innovating with customized communications were flagged as essential for ensuring print retains its share of voice.

Technology transforming print communications

The webinar explored how technology and AI are reshaping the print industry. From printed electronics and on-demand multilingual collateral to data-driven personalization, the possibilities are expanding rapidly. AI’s ability to analyze data, speed up workflows, and generate creative ideas was highlighted; it can even predict how impactful a piece of creative or a campaign is going to be and then suggest improvements as a result.

Most brands are curious and dipping a toe into AI, with some embracing it more readily while others remain cautious – something that can be sector dependent. However, everyone agreed that human oversight remains critical to keep campaigns authentic and emotionally resonant.

So, what is a good corporate sourcing strategy?

The final topic focused on building a strong corporate sourcing strategy for print. Collaboration between procurement, marketing, and strategic partners emerged as key. Attendees were encouraged to engage printing partners early, consider innovation and sustainability alongside cost, and develop positive cross-departmental relationships to ensure print delivers value across the business. Working together with experts and stakeholders, both internally and externally, can provide deeper knowledge and understanding and better outcomes for all.


This webinar was packed with insights, expert perspectives and practical strategies for procurement and marketing professionals alike. To receive the full recording of the webinar, please fill in your details and a member of our team will be in touch with you soon.

Print is APS’s heritage – it’s where we began. Talk to us today: [email protected].

Request a recording of Unlocking the Future of Marketing Print

Webinar: Retail Reimagined – The new era of in-store experiences

We recently hosted a webinar exploring the evolution of retail spaces, with a forward-looking lens on the trends transforming the sector in 2025 and beyond. The session was packed with powerful insights, provocative predictions, and practical inspiration for brands striving to stay relevant in a rapidly shifting retail landscape.

The session was hosted by George Smart, Director of Customer Solutions and Strategic Growth at APS, with expert speakers Kate Ancketill, CEO of GDR and industry-renowned retail futurist, alongside Finn Lawton, Senior Strategist, and Tony Massey, Executive Director, both of APS. The event set the stage for a thought-provoking conversation on what’s driving consumer expectations and how brands can respond by inspiring awe and re-establishing an emotional connection.

Finn kicked off the insights by painting a picture of the current retail terrain. He shared how rising economic pressures, digital acceleration, and evolving lifestyles are reshaping how consumers shop – and what they expect in return. He underlined the importance of staying true to your brand and authentically embedding innovation rather than chasing ‘hype’.

AI is now a non-negotiable element of modern retail. With the rise of agentic AI, Kate discussed how, as technology takes care of our more transactional purchases, people are freed up to do the ‘experiencing’ and increasingly crave more human-centric interactions.

People want brands to help them feel something – 61% of consumers say they want intense emotional experiences, while 63% seek multi-sensory encounters (Wunderman Thompson, The Age of Re-enchantment, 2023).

Consumers are seeking more than just transactions; they’re looking for added value, meaningful engagement and purpose-driven spaces where they can find moments of magic. Kate explored the rising demand for collective experiences, from gamified product visualisation and enveloping 4D environments to immersive storytelling and pop-ups curated around iconic brand features. She highlighted the importance of biophilic design, wellbeing-led experiences like communal saunas and breathwork spaces, and fully integrated AI-powered avatars bringing fun, personalisation and expertise to in-store shopping.

“The antidote to the ‘age of anxiety’ is awe-driven joy.”

Today’s retail must deliver on both practicality and purpose. To do this successfully and provide what your customers need and want, it’s vital that you get to know who they are. From multi-functional spaces to circular economy initiatives such as second-hand selling and buying and rental models, meaningful brand experiences are the future of retail. This webinar was a call to action for companies to reimagine their physical environments as places of wonder, connection and purpose.

The session concluded with Tony sharing how APS has helped leading retailers bring innovation to life, with examples including gamification and cost-effective experiential design. Brands require creative strategies to cut through in an increasingly competitive space, which is something our talented Retail and Brand Experience team is passionate about delivering. Talk to us today: [email protected].

Want to see the full trend forecast, including case studies and standout activations?

To receive a recording of the webinar, please fill in your details and a member of our team will be in touch with you soon.

Request a recording of Retail Reimagined

Webinar: Unlocking Creativity with AI

Artificial Intelligence, or AI, is a fascinating topic that cannot be avoided as it rapidly becomes more advanced, sophisticated and integrated into our day-to-day lives. We brought together Gabrielle Robitaille, Associate Director of Digital Policy and AI Community Lead at the WFA, Maartje van Beek, Creative and Content Business Director, and Len Borghuis, Motion Graphic Designer, both of APS, to discuss how the technology can realistically be used within your organisation, particularly within the creative process.

Hosted by George Smart of APS, the session saw these industry experts dive into all aspects of this revolutionary tool, starting with some insightful research conducted by the WFA investigating how brands are currently using AI, where they are in their journey, and the main roadblocks to AI adoption.

At the time of publishing, 84% of companies using AI say they are still in the very early awareness or development stages, with only 16% reporting that they are at a mature stage or beyond.

This shows there is still a tentativeness around this complex topic while brands learn how to navigate the risks, with legal issues, upskilling and ethical concerns topping the list of apprehensions around the technology.

For those businesses that have started to dip into the world of AI, there are some extremely exciting developments in how it can aid the creative process, and it is undoubtedly transforming the marketing industry. Maartje presented some interesting use cases of how APS has used AI for both creative content and creative production, showing how automation can speed up repetitive tasks, delivering efficiencies and freeing up resource for other creative projects that require a more human touch.

That’s not to say AI is perfect – there are still flaws – but knowing how to work around them and use AI effectively is what can really give your brand a competitive edge. Len talked through upcoming developments and trends, both good and bad, giving a fantastic, well-rounded insight into the way this technology is moving forward.

Here at APS, we thrive on innovation and aim to keep abreast of ever-evolving technologies to find the best solutions for our clients, enabling them to maximise efficiencies in both resource and budget. Our team will work with you to make innovative yet practical recommendations. Talk to us today, [email protected].

To receive a recording of the webinar, please fill in your details and a member of our team will be in touch with you soon.

Request a recording of Unlocking Creativity with AI

AI: Beyond benefits – our ethical responsibilities in its utilisation

Everyone in business is talking about AI, and nearly everyone has an opinion. Business owners want to bring it on board, realise its benefits and harness its power – and no organisation wants to get left behind where there are tangible, measurable improvements to be gained. Employees are divided, with some confident that AI will add value to their work, streamline operations and eliminate repetitive processes, while others are genuinely afraid for their jobs.

Irrespective of the debate and discussion around its use, however, AI is here to stay; underpinning an explosion of new, powerful technologies with the potential to change the world. And with great power comes great responsibility – it is vital for organisations adopting AI to be alert to the possible negatives, scanning for biases and helping put legislative safeguarding into effect to protect consumers and users. Read more about integrating AI into the workplace here.

What is AI?

In order to better understand both the likely positives and negatives of adopting AI, we need to take a closer look at what it does. AI uses intelligent agents – systems that can reason, learn and act autonomously, much as we do ourselves. And like human beings, the more information these agents are fed around specific tasks, the better they become at carrying them out. Types of artificial intelligence include natural language processing (NLP), computer vision, robotics, machine learning and deep learning, in which artificial neural networks learn from data.

AI is automating many tasks once done by humans, improving efficiency and speed and, in some cases, even making better decisions than us. Businesses have seized on the opportunities it affords for streamlining operations and cutting costs. AI is a disruptor too, since its capabilities have the potential to displace many jobs – but it also creates other jobs and can assist humans by removing the need to carry out repetitive tasks, freeing up time for the more enjoyable, creative aspects of our roles. AI is naturally the subject of many current discussions and debates, and while there are still many unknowns around it, there is an undercurrent of urgency around exploring and identifying its potential effects on humanity, both positive and negative. This inevitably leads to questions about where our own responsibilities lie in ensuring that the AI technology we are developing or using does not inadvertently discriminate against or harm others.

The dangers of AI

Much has been said about the potential dangers of AI, with some of those most involved in creating and developing the technology delivering stark warnings about its possible future uses – among them Geoffrey Hinton, the ‘Godfather of AI’, and Elon Musk, who signed an open letter declaring it ‘a profound risk to humanity’. When we consider that everything within the scope of human imagination could be both automated and made efficiently deadly by harnessing the power of AI, their concerns feel real and immediate.

AI could be used to the detriment of mankind just as easily as for our benefit, driving more powerful disinformation campaigns, deadlier biological weapons and horribly efficient planning for stifling social controls. With careful adoption, however, and by building regulation in as AI efficiencies are rolled out rather than letting it play catch-up, businesses and individuals can anticipate and avoid the most likely negative outcomes. Three impacts of AI that spring to mind for regulatory attention are bias, misinformation and environmental costs.

Environmental impact

Digital technologies have long been hailed as saviours of the environment, but as their use goes mainstream, this is no longer strictly true. The burden on hardware and resources is increasing in line with the uptake of digital products and services, and the energy needed to train and run AI models is staggering. Some large language models produce emissions in line with those of the aviation industry. Data centres need hundreds of thousands of gallons of water a day for cooling (leading to initiatives to place them next to swimming pools, where their waste heat can keep the water warm – or, in Finland, to heat hundreds of homes). ICT industry emissions worldwide are expected to reach 14% of global emissions by 2040, with communications networks and data centres being the heaviest users. To counterbalance this, many more schemes like those above could be designed and implemented when expanding IT infrastructure, to get the most out of the energy used to run it, and it would be good to see incentives in place to reward organisations with the foresight to do this.

Misinformation

AI language models make disinformation campaigns much easier, reducing the cost and effort required to create and deliver content. AI also has a history of creating inaccurate content. In late 2022, StackOverflow, a website used by developers, issued a temporary ban on posting ChatGPT-generated content to the site due to the inaccuracy of the coding answers the AI produced, as this kind of activity violated its community standards.

Disinformation campaigns could also be used to attempt to influence the outcomes of democratic elections, for example, with the barriers to creating plausible content disappearing and the production of social media bots posing as real voters becoming ever easier to do and harder to detect. But the biggest and arguably the most immediate concern raised around the uptake of AI technologies by business and industry has been around bias.

Bias in AI

It is questionable whether such a thing as truly neutral data exists. AI-powered machines, which learn from the data their human creators feed them, inevitably replicate and even amplify any biases contained within that data. In some cases, AI even automates the very biases it was created to avoid. An Amazon recruitment system, for example, was fed data on the most suitable candidates for a specific role. However, since most of the previously successful candidates had been men, the system famously became biased in favour of male candidates while learning how to judge who would be suitable for that role going forward.

Other notorious examples of AI bias include an American programme for profiling criminals that wrongly identified black men as being at higher risk of reoffending than their white counterparts, and Microsoft’s chatbot Tay, which needed only 24 hours to start sharing discriminatory tweets based on interactions with its more Machiavellian users. The biases exhibited in these examples, and the negative outcomes they could have on human lives and livelihoods, are clear proof that there is still a lot of work to do before AI can be trusted to make suitably nuanced judgements about individuals.

How to avoid bias in AI

Avoiding bias in AI is both a critical and an ethical challenge. Bias can be introduced into AI systems unintentionally through biased training data, biased algorithms or biased decision-making processes. To minimise bias in AI, organisations could consider strategies such as ensuring that training data is representative, preprocessing and cleaning data to identify and remove biases, and always being mindful of how data is collected in the first place. Regularly auditing and testing AI systems for bias is another way to avoid replicating it, and there are tools and methods for identifying bias in AI models, such as disparate impact analyses, that can assist with this. Algorithmic fairness techniques can also be implemented with the specific aim of reducing bias, and human review and oversight is always likely to play a vital part in any AI decision-making process.
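
To give a flavour of what a disparate impact analysis involves, here is a minimal, purely illustrative sketch in Python. The data, column names and groups are invented for the example, and the 0.8 ‘four-fifths rule’ threshold is one commonly cited benchmark rather than a universal standard – a real audit would be considerably more thorough.

```python
# Purely illustrative sketch: checking model outcomes for disparate impact
# using invented data. Column names, groups and figures are hypothetical.

import pandas as pd

def disparate_impact_ratios(df, group_col, outcome_col, reference_group):
    """Return each group's selection rate divided by the reference group's rate.

    A ratio below 0.8 (the 'four-fifths rule' used as a rough benchmark in
    some employment contexts) is a common flag for potential adverse impact.
    """
    selection_rates = df.groupby(group_col)[outcome_col].mean()
    return selection_rates / selection_rates[reference_group]

# Hypothetical screening outcomes: 1 = shortlisted, 0 = rejected.
results = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "selected": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0,   # group A: 6/10 selected
                 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],  # group B: 2/10 selected
})

ratios = disparate_impact_ratios(results, "group", "selected", reference_group="A")
print(ratios)
# Group B's ratio (0.2 / 0.6 ≈ 0.33) falls well below 0.8, so these outcomes
# would warrant further investigation before trusting the system's decisions.
```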

How is AI regulated?

This brings us back to the subject of this article and an important element of a much wider debate around AI – what are our responsibilities as individuals and organisations enjoying the benefits provided by AI technologies, and how can we ensure that our employment of these technologies and the data they generate and use does not harm other individuals and organisations?

The regulation of AI is a complex and rapidly evolving landscape. It can vary significantly from one country to another and encompasses a wide range of aspects, from data privacy and ethics to safety and liability. Many countries have data privacy and protection laws that apply to AI systems. In the UK, for example, the UK General Data Protection Regulation (UK GDPR) sets strict requirements for the use and processing of personal data, including data used by AI systems. Compliance with these regulations is crucial when developing and deploying AI applications. AI systems are also subject to safety and security regulations in some industries, such as autonomous vehicles and healthcare, and AI systems used in HR are increasingly regulated to prevent discrimination against protected groups.

Conclusion

While the outlook is positive, determining liability for AI-related incidents remains complex, and more safeguards are needed to protect the public. Legislation is still in progress to clarify the liability of AI developers and users in the case of accidents or errors caused by AI systems. Human and consumer rights groups such as the UK’s Big Brother Watch are continually identifying ways in which AI and the data it captures negatively affect or discriminate against people, highlighting areas for improvement and action and, in many cases, launching high-profile campaigns against companies, government departments and other entities to achieve procedural and legislative change to protect the public. We can’t be far from seeing the establishment of government and industry bodies and standards to ensure the responsible development and use of AI technologies and to set much-needed requirements and best practices.

It is increasingly likely that, as a fundamental element of thoughtful branding and any forward-thinking CSR strategy, organisations and brands at the forefront of the AI expansion will be expected to lead the identification and mitigation of actual and potential harms caused by AI technologies as they become apparent, ensuring they both contribute to and stay safely within emerging regulatory frameworks.

Maintaining ethical AI practices requires a holistic approach that encompasses both technical and organisational aspects. It should be seen as an ongoing commitment and integral to your organisational culture and operations. By prioritising ethics in AI, businesses can build trust, foster innovation and make a positive contribution to society.

Request the visual PDF of the ‘AI: Beyond benefits – our ethical responsibilities in its utilisation’ long article here.