In this article, I describe how to build a simple language detection application natively using Chrome’s built-in language detection API and Angular. First, the Angular application calls an API to create a language detector and uses it to detect the language code of the input text. It then calls a helper function to map language codes to language names.
The cost is zero because the application does not require an LLM from any provider. This is the happy path for users on Chrome Dev or Chrome Canary. If a user is on a non-Chrome or older Chrome browser, there should be a fallback implementation, such as calling Gemma or Gemini on Vertex AI to return the language name of the text.
Install Gemini Nano on Chrome
Update Chrome Dev/Canary to the latest version. As of this writing, the latest version of Chrome Canary is 133.
Please refer to this section to sign up for the early preview program of Chrome’s built-in AI.
https://developer.chrome.com/docs/ai/built-in#get_an_early_preview
Please refer to this section to enable Gemini Nano on Chrome and download the model. https://developer.chrome.com/docs/ai/get-started#use_apis_on_localhost
Install the language detection API on Chrome
- Go to chrome://flags/#language-detection-api.
- Select Enabled.
- Click Restart or restart Chrome.
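After Chrome restarts, you can optionally confirm that the API is exposed by running a quick check in the DevTools console. This is a convenience check, not part of the application; the availability value depends on whether Gemini Nano has finished downloading.

// Run in the DevTools console after enabling the flag.
// 'readily' means the model is ready; 'after-download' means it is still downloading.
const capabilities = await window.ai?.languageDetector?.capabilities();
console.log(capabilities?.available);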
Scaffold a new Angular application
ng new language-detector-demo
Install dependencies
npm i --save-exact --save-dev @types/dom-chromium-ai
This dependency provides TypeScript types for all of Chrome’s built-in AI APIs, so developers can use TypeScript to write elegant, type-safe code when building AI applications.
In main.ts, add a reference tag to point to the package’s type definition file.
// main.ts
/// <reference types="dom-chromium-ai" />
Provide the language detection API
// constants/core.constant.ts
import { InjectionToken } from '@angular/core';

export const AI_LANGUAGE_DETECTION_API_TOKEN = new InjectionToken<AILanguageDetectorFactory | undefined>('AI_LANGUAGE_DETECTION_API_TOKEN');

// provider file
import { EnvironmentProviders, inject, makeEnvironmentProviders, PLATFORM_ID } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';
import { AI_LANGUAGE_DETECTION_API_TOKEN } from '../constants/core.constant';

export function provideAILanguageDetectionAPI(): EnvironmentProviders {
  return makeEnvironmentProviders([
    {
      provide: AI_LANGUAGE_DETECTION_API_TOKEN,
      useFactory: () => {
        const platformId = inject(PLATFORM_ID);
        // Only touch window in the browser; return undefined during server-side rendering.
        const objWindow = isPlatformBrowser(platformId) ? window : undefined;
        return objWindow?.ai?.languageDetector ? objWindow.ai.languageDetector : undefined;
      },
    }
  ]);
}
I defined an environment provider that returns the languageDetector member of the window.ai namespace. When a class injects AI_LANGUAGE_DETECTION_API_TOKEN, it can access the language detection API and call its methods to detect the language code.
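For illustration, a minimal consumer might inject the token and query the API's capabilities as sketched below. This is a hypothetical example (CapabilityProbeService is not part of the article); the LanguageDetectionService later in the article injects the token in the same way.

// Hypothetical example of injecting the token in an injection context.
import { inject, Injectable } from '@angular/core';
import { AI_LANGUAGE_DETECTION_API_TOKEN } from '../constants/core.constant';

@Injectable({ providedIn: 'root' })
export class CapabilityProbeService {
  // The token resolves to window.ai.languageDetector, or undefined when unavailable.
  #api = inject(AI_LANGUAGE_DETECTION_API_TOKEN);

  async isReady(): Promise<boolean> {
    if (!this.#api) {
      return false;
    }
    const capabilities = await this.#api.capabilities();
    return capabilities.available === 'readily';
  }
}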
// app.config.ts
import { ApplicationConfig } from '@angular/core';

export const appConfig: ApplicationConfig = {
  providers: [
    provideAILanguageDetectionAPI()
  ]
};
In the application configuration, provideAILanguageDetectionAPI is added to the providers array.
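For completeness, a standard Angular bootstrap in main.ts passes this configuration to the root component. This sketch assumes the default Angular CLI file layout and a root component named AppComponent.

// main.ts
/// <reference types="dom-chromium-ai" />
import { bootstrapApplication } from '@angular/platform-browser';
import { appConfig } from './app/app.config';
import { AppComponent } from './app/app.component';

bootstrapApplication(AppComponent, appConfig)
  .catch((err) => console.error(err));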
Verify browser version and API availability
Chrome’s built-in AI is experimental, but the language detection API is supported in Chrome version 129 and above. Therefore, I implemented validation logic to ensure that the API is available before displaying the UI so the user can enter text.
Validation rules include:
- The browser is Chrome
- The browser version is at least 129
- The ai object exists in the window namespace
- The language detection API reports that it is available
export async function checkChromeBuiltInAI(): Promise<string> {
  if (!isChromeBrowser()) {
    throw new Error(ERROR_CODES.UNSUPPORTED_BROWSER);
  }

  if (getChromeVersion() < CHROME_VERSION) {
    throw new Error(ERROR_CODES.OLD_BROWSER);
  }

  if (!('ai' in globalThis)) {
    throw new Error(ERROR_CODES.NO_API);
  }

  const languageDetector = inject(AI_LANGUAGE_DETECTION_API_TOKEN);
  const status = (await languageDetector?.capabilities())?.available;
  if (!status) {
    throw new Error(ERROR_CODES.NO_API);
  } else if (status === 'after-download') {
    throw new Error(ERROR_CODES.AFTER_DOWNLOAD);
  } else if (status === 'no') {
    throw new Error(ERROR_CODES.NO_GEMINI_NANO);
  }

  return '';
}
The checkChromeBuiltInAI function ensures that the language detection API is defined and available for use. If any check fails, the function throws an error; otherwise, it returns an empty string.
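The isChromeBrowser and getChromeVersion helpers and the ERROR_CODES and CHROME_VERSION constants are not shown in this article. A minimal sketch of what they might look like follows; this is an assumed implementation, not the author's exact code, and the error messages are placeholders.

// Assumed helper implementations; the article does not show the originals.
export const CHROME_VERSION = 129;

export const ERROR_CODES = {
  UNSUPPORTED_BROWSER: 'Your browser is not supported. Please use Google Chrome Dev or Canary.',
  OLD_BROWSER: 'Please upgrade Chrome to version 129 or later.',
  NO_API: 'The Language Detection API is not available. Check chrome://flags/#language-detection-api.',
  AFTER_DOWNLOAD: 'Gemini Nano is still downloading; please try again in a few minutes.',
  NO_GEMINI_NANO: 'Gemini Nano is not available on this device.',
} as const;

export function isChromeBrowser(): boolean {
  // userAgentData is only exposed by Chromium-based browsers.
  const brands = (navigator as any).userAgentData?.brands ?? [];
  return brands.some((b: { brand: string }) => b.brand === 'Google Chrome');
}

export function getChromeVersion(): number {
  const match = navigator.userAgent.match(/Chrome\/(\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}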
import { catchError, from, Observable, of } from 'rxjs';

export function isLanguageDetectionAPISupported(): Observable<string> {
  return from(checkChromeBuiltInAI()).pipe(
    catchError((e) => {
      console.error(e);
      return of(e instanceof Error ? e.message : 'unknown');
    })
  );
}
The isLanguageDetectionAPISupported function catches errors and returns an Observable of the error message.
Display the AI components
import { Component } from '@angular/core';
import { toSignal } from '@angular/core/rxjs-interop';

@Component({
  selector: 'app-detect-ai',
  imports: [LanguageDetectionComponent],
  template: `
    @let error = hasCapability();
    @if (!error) {
      <app-language-detection />
    } @else if (error !== 'unknown') {
      <p>{{ error }}</p>
    }
  `
})
export class DetectAIComponent {
  hasCapability = toSignal(isLanguageDetectionAPISupported(), { initialValue: '' });
}
The DetectAIComponent renders the LanguageDetectionComponent when there is no error. Otherwise, it displays the error message stored in the hasCapability signal.
import { Component, computed, inject, signal } from '@angular/core';
import { FormsModule } from '@angular/forms';

@Component({
  selector: 'app-language-detection',
  imports: [FormsModule, LanguageDetectionResultComponent],
  template: `
    <div>
      <span>Input text: </span>
      <textarea rows="4" [(ngModel)]="inputText"></textarea>
    </div>
    <button (click)="setup()">Create detector</button>
    <button (click)="teardown()">Destroy detector</button>
    <button (click)="detectLanguage()" [disabled]="isDisableDetectLanguage()">Detect language</button>
    <app-language-detection-result [detectedLanguages]="detectedLanguages()" />
  `
})
export class LanguageDetectionComponent {
  service = inject(LanguageDetectionService);
  inputText = signal('');
  detectedLanguages = signal<LanguageDetectionWithNameResult[]>([]);
  capabilities = this.service.capabilities;
  detector = this.service.detector;
  isDisableDetectLanguage = computed(() =>
    this.capabilities()?.available !== 'readily'
    || !this.detector() || this.inputText().trim() === '');

  async setup() {
    await this.service.createDetector();
  }

  teardown() {
    this.service.destroyDetector();
  }

  async detectLanguage(topNLanguages = 3) {
    const results = await this.service.detect(this.inputText(), topNLanguages);
    this.detectedLanguages.set(results);
  }
}
The LanguageDetectionComponent has three buttons. The Create button calls the language detection service to create a language detector. The Destroy button destroys the language detector to release resources. The Detect button calls the language detection API to detect the top three language codes.
export type LanguageDetectionWithNameResult = LanguageDetectionResult & {
  name: string;
};

import { Component, input } from '@angular/core';

@Component({
  selector: 'app-language-detection-result',
  template: `
    <p>Response:</p>
    @for (language of detectedLanguages(); track language.detectedLanguage) {
      <p>
        Confidence: {{ language.confidence.toFixed(3) }},
        Detected Language: {{ language.detectedLanguage }},
        Detected Language Name: {{ language.name }}
      </p>
    }
  `,
})
export class LanguageDetectionResultComponent {
  detectedLanguages = input<LanguageDetectionWithNameResult[]>([]);
}
The LanguageDetectionResultComponent is a presentational component that displays the confidence, language code, and language name of each result.
Add a new service to wrap the language detection API
import { inject, Injectable, OnDestroy, signal } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class LanguageDetectionService implements OnDestroy {
  #controller = new AbortController();
  #languageDetectionAPI = inject(AI_LANGUAGE_DETECTION_API_TOKEN);

  #detector = signal<AILanguageDetector | undefined>(undefined);
  detector = this.#detector.asReadonly();

  #capabilities = signal<AILanguageDetectorCapabilities | null>(null);
  capabilities = this.#capabilities.asReadonly();

  async detect(query: string, topNResults = 3): Promise<LanguageDetectionWithNameResult[]> {
    …
  }

  destroyDetector() {
    …
  }

  private resetDetector() {
    const detector = this.detector();
    if (detector) {
      detector.destroy();
      console.log('Destroy the language detector.');
      this.#detector.set(undefined);
    }
  }

  async createDetector() {
    …
  }

  ngOnDestroy() {
    …
  }
}
The LanguageDetectionService encapsulates the logic of the language detection API. The createDetector method sets the capability status and creates a new detector.
async createDetector() {
  if (!this.#languageDetectionAPI) {
    throw new Error(ERROR_CODES.NO_API);
  }

  this.resetDetector();

  // Query the capabilities and create a new detector in parallel.
  const [capabilities, newDetector] = await Promise.all([
    this.#languageDetectionAPI.capabilities(),
    this.#languageDetectionAPI.create({ signal: this.#controller.signal })
  ]);

  this.#capabilities.set(capabilities);
  this.#detector.set(newDetector);
}
The destroyDetector method clears the capability state and destroys the detector.
destroyDetector() {
  this.#capabilities.set(null);
  this.resetDetector();
}
The detect method accepts text and calls the API, which returns results in descending order of confidence. The method returns the top N results (three by default) along with their language names.
async detect(query: string, topNResults = 3) {
  if (!this.#languageDetectionAPI) {
    throw new Error(ERROR_CODES.NO_API);
  }

  const detector = this.detector();
  if (!detector) {
    throw new Error('Failed to create the LanguageDetector.');
  }

  const minTopNResults = Math.min(topNResults, MAX_LANGUAGE_RESULTS);
  // The API returns results sorted by confidence in descending order.
  const results = await detector.detect(query);
  const probablyLanguages = results.slice(0, minTopNResults);
  return probablyLanguages.map((item) => ({ ...item, name: this.languageTagToHumanReadable(item.detectedLanguage) }));
}
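The languageTagToHumanReadable helper that maps a language code to a human-readable name is mentioned earlier but not shown in the article. A minimal sketch based on the standard Intl.DisplayNames API could look like this; it is an assumed implementation, not the author's exact code.

// Assumed implementation of the helper used in detect(); maps a BCP 47
// language tag (for example 'ja') to an English display name ('Japanese').
private languageTagToHumanReadable(languageTag: string | null, targetLanguage = 'en'): string {
  if (!languageTag) {
    return 'unknown';
  }
  const displayNames = new Intl.DisplayNames([targetLanguage], { type: 'language' });
  return displayNames.of(languageTag) ?? 'unknown';
}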
When the service is destroyed, Angular calls the ngOnDestroy lifecycle method, which destroys the language detector to free up memory.
ngOnDestroy(): void {
  const detector = this.detector();
  if (detector) {
    detector.destroy();
  }
}
All in all, software engineers can create Web AI applications without setting up backend servers or racking up the costs of a cloud-hosted LLM.