Data Security & AI Language Models
- Updated on 20 Jun 2023
- 2 Minutes to read
As a steward of our customers' data, Dialpad rigorously reviews the systems and services of the third parties that process data on our behalf (known as "subprocessors").
Providers of large language models (LLMs), such as OpenAI, represent an emerging category of subprocessor. Working with them requires both applying our existing security review criteria and developing new ones to address the unique technology and risks associated with LLMs.
Dialpad will continuously refine its security and privacy assessments to include these innovative partners and the capabilities they bring to the table. When evaluating an LLM provider, Dialpad examines the following security, privacy, and compliance aspects:
- Security practices review
- Contractual protections
- Customer data rights
- Regulatory compliance
Let's go over each of these in more detail.
Security Practices Review
Each subprocessor undergoes a comprehensive security evaluation that gathers documented evidence of the measures it takes to safeguard its systems and protect customer data.
Dialpad looks for vendors to demonstrate information security practices at least as mature as Dialpad's own program, evidenced by SOC 2 reports, ISO 27001, ISO 27017, and ISO 27018 certifications, third-party auditing, penetration testing, and responsiveness to industry security events.
Contractual Protections
Before transferring customer data to any subprocessor, Dialpad requires contractual protections that both permit and place explicit limits on the processing carried out on that data. A Data Processing Addendum (DPA) provides Dialpad and its customers enforceable assurances, including that the appropriate security practices will remain in place, that customer data will be processed only for specified purposes, and that the subprocessor will enforce the same level of rigor on any partners it relies on.
Where available, Dialpad also enters into a Business Associate Agreement (BAA) to support customers that work with healthcare data, such as protected health information (PHI).
Customer Data Rights
Your data is always your property.
As training data plays a crucial role in LLMs, it is essential for customers and partners to have a clear understanding of the allowable uses of customer data.
Dialpad requires prospective LLM partners to agree that they will not use Dialpad customer data for training purposes without proper notice and consent. We also require customer data to be deleted promptly after processing is complete, typically within 30 days.
On our subprocessors page, we share who we work with, what services they provide, and how to learn more.
We are proud of the quality of the partners we choose and the ecosystem that helps us provide a service that is reliable, scalable, innovative, and cost-effective.
Regulatory Compliance
Where LLMs get their training data and how they use it are questions under intense scrutiny by regulators in the US, EU, and elsewhere. We closely monitor these deliberations to keep our service offerings aligned with the evolving legal environment.
As a unified business communications platform built on AI, Dialpad has been thinking deeply about ethical practices in AI services for a long time. We hold ourselves to the standard that we want to build products and services that are part of a world we want to live in, and that means considering not just the principles of Security & Safety addressed above, but also aspects like Fairness & Inclusiveness, User-focused Benefit, and Accountability.
We look for these same principles in our prospective partners.
For more information on security and Dialpad's protocols, be sure to review the following articles.