Are AI note-takers putting your privacy at risk?
When the COVID-19 pandemic hit, much of our normal life was disrupted. In an instant, we stepped six feet apart from each other and endured many days of self-imposed isolation to protect our loved ones. It wasn’t just our personal lives that were affected; the workplace landscape in Jamaica also shifted.
Online platforms like Zoom, Google Meet, and Microsoft Teams offered a virtual alternative for us to connect with colleagues, friends, and even family members. After the pandemic, virtual meetings transitioned from a temporary fix to a staple in our daily lives, allowing us to attend birthday parties, classes, and even workout sessions with personal trainers.
With the advent of new technology, related tools and services have also emerged. Artificial intelligence (AI) note-taking apps are one such innovation. The brilliance of these applications lies in their simplicity. An AI note-taking app may be invited to a virtual meeting, quietly sit in the corner, and record the entire conversation. At the end, it generates a summary of what was discussed and automatically shares the notes with participants. These tools are often free, easy to use, and more efficient at taking notes than many of us humans. Like many AI tools, these note-taking apps offer the opportunity to save time, improve efficiency, and boost productivity.
It has been over a year since the official end of the COVID-19 pandemic, and while many aspects of life have returned to pre-pandemic norms, some changes have endured. But what is the cost of this level of efficiency, and have we been too eager to accept the exchange rate being offered by artificial intelligence? Have we truly considered how much of our privacy is at risk? Have we stopped to examine the measures providers are taking to safeguard sensitive company and customer information from being hacked, exposed, or used against us in ransom attacks? Are these platforms compliant with Jamaica’s new Data Protection Act or international regulations like the European Union’s General Data Protection Regulation?
For private individuals using these platforms to chat with family, these questions may be of little concern. However, for organisations, these are important questions that should be considered and swiftly addressed. I am not against AI note-taking tools. In fact, quite the opposite. I am a huge advocate for incorporating AI in our personal and professional lives. However, as with any technology, we should recognise its power and seek to use it responsibly.
Responsible use should include two key considerations. First, organisations need to establish strong security and privacy standards to ensure that AI note-taking tools comply with pre-agreed measures that protect sensitive information related to the organisation, its people, and its stakeholders. At a minimum, these tools should offer key features such as end-to-end encryption and robust data protection policies.
It is also important to understand where recordings are stored, whether locally or in the cloud, and who has access to them. Additionally, organisations should verify that the tool complies with the new Data Protection Act and other relevant laws and industry standards.
Finally, it is essential to select tools that are transparent about how they handle data, including voice recordings and transcripts. I recognise that these features do not guarantee absolute security, but I believe that they will significantly reduce the risk of data breaches or misuse.
The second key consideration is that organisations should set clear guidelines on when and how AI note-takers can be used in meetings. Decide who has the authority to initiate them, and whether individual participants are allowed to invite their own tools, which may or may not meet the established standards. It is important to determine and communicate whether AI note-takers will be used in meetings involving sensitive information or important decisions relating to the company or its clients. Not all meetings should be attended by an AI note-taking tool.
It’s not just the content of your meetings that’s at risk. Many AI note-takers offer features that allow them to automatically join all of your virtual meetings, which requires access to your calendar and/or e-mail account. Once granted access, they typically scan your calendar for events with virtual meeting URLs. This action alone has the potential to expose information in your calendar that is unrelated to any meeting notes being processed.
We have been giving applications access to our devices, e-mail, and calendars for years, so the primary concern isn’t the access itself but whether a tool manages our privacy responsibly. It’s essential for us to ensure these tools handle the access they’ve been given with the utmost care, and that the companies behind them act as responsible custodians of our sensitive information.
While artificial intelligence offers tremendous benefits for companies and individuals alike, we must remain aware of the potential risks and carefully consider what we’re sacrificing for the sake of convenience and efficiency. This does not mean reverting to the days of pen and paper, but rather using AI responsibly by thoroughly vetting these tools and establishing protocols that align with our real-world privacy goals. By making informed decisions and prioritising privacy, we can leverage the advantages of AI note-takers without compromising our sensitive information.
Marc Frankson is lead consultant at Transcend AI Consulting. He can be contacted at grow@transcendwithai.com
