A reference page for the questions that come up most often about Pretext (@chenglou/pretext), the pure-JavaScript text layout library. Each question links to a deeper guide where one exists. If your question isn't here, the Pretext library overview and the API reference cover most of the rest.
Pretext is a pure-JavaScript text layout engine that computes how text will wrap, how tall it will be, and where each line begins and ends — all without touching the DOM. It's framework-agnostic, ~15KB minified, has zero runtime dependencies, and is roughly 100–500x faster than getBoundingClientRect()-based DOM measurement. It was created by Cheng Lou (a React core alumnus and Midjourney engineer) and is published on npm as @chenglou/pretext. See the Pretext library overview for a full introduction.
Pretext uses a two-phase pipeline. The first phase, prepare(text, font), runs once per (text, font) combination: it tokenizes the text using Unicode segmentation rules, identifies legal break opportunities, measures glyph widths via canvas's measureText(), and caches the result. The second phase, layout(prepared, width), is pure arithmetic over the cached data — adding token widths, finding the best break per line, returning total height and line count. Because the expensive work is in prepare() and the cheap work is in layout(), you can re-layout the same text at different widths in microseconds. The full deep dive is in How Pretext Works.
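The split between the two phases can be illustrated with a self-contained sketch. This is not Pretext's implementation: it fakes glyph measurement with a fixed per-character width (the real prepare() measures via canvas's measureText() and uses Intl.Segmenter, and the real layout() picks better breaks), but it shows why the second phase is cheap enough to rerun at any width.

```javascript
// Phase 1 (expensive, cacheable): tokenize and "measure" once per (text, font).
// A fixed 8px per character stands in for real font metrics here.
function prepare(text, font) {
  const charWidth = 8;
  const tokens = text.split(/(\s+)/).filter(Boolean).map((t) => ({
    text: t,
    width: t.length * charWidth,
    isSpace: /^\s+$/.test(t),
  }));
  return { tokens, lineHeight: 20, font };
}

// Phase 2 (cheap, pure arithmetic): greedy first-fit line breaking over
// the cached token widths. Start a new line when the next word overflows.
function layout(prepared, width) {
  let lines = 1;
  let x = 0;
  for (const token of prepared.tokens) {
    if (token.isSpace) { x += token.width; continue; }
    if (x > 0 && x + token.width > width) { lines += 1; x = 0; }
    x += token.width;
  }
  return { lineCount: lines, height: lines * prepared.lineHeight };
}

const prepared = prepare('the quick brown fox jumps', '16px sans-serif');
layout(prepared, 120); // re-layout at any width without re-measuring
```

Because layout() never touches the text or the font, only the cached widths, calling it again at a different width is just another pass of additions and comparisons.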
Yes — typically 100x to 500x faster, depending on text length and how many measurements you need. The DOM's getBoundingClientRect() for a single text node costs ~1ms because it triggers a forced synchronous layout. Pretext's layout() for an already-prepared string costs microseconds. The benefit compounds when you measure many strings: 1000 messages × 1ms = 1 second of jank with the DOM, vs. 1000 × 5µs = 5 milliseconds with Pretext. The reproducible benchmark code is on the benchmarks page.
Pretext uses canvas's measureText() for glyph-width measurement during prepare(), but it doesn't render to canvas — the output of Pretext is just numbers (heights, line counts, character indices). You can then render those numbers however you like: with the DOM, with canvas, with WebGL, with SVG, or in a Node-side rendering pipeline. The "canvas dependency" is only about getting accurate font measurements; the layout computation itself is pure arithmetic.
Roughly 15KB minified, about 5KB gzipped, for the full library. It tree-shakes cleanly via its exports map — if you only import prepare and layout, your bundle includes only what's needed for those two functions, well under 10KB. There are no transitive dependencies because Pretext has no runtime dependencies. See the npm package page for the exact bundle-impact numbers.
No. Zero runtime dependencies, zero peer dependencies, zero optional dependencies. This is intentional — Pretext is designed to be a small, focused engine you can drop into any environment without worrying about transitive dependency conflicts. The package.json dependencies field is empty. See the npm details for more.
Modern browsers with Intl.Segmenter support: Chrome/Edge 87+, Safari 14.1+, Firefox 125+. For older browsers, polyfill Intl.Segmenter using a package like intl-segmenter-polyfill. Pretext also runs in modern Node.js (16+), but you'll need a server-side canvas implementation like @napi-rs/canvas for prepare() to work in Node — the standard Node runtime has no font engine.
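The Intl.Segmenter dependency is just the standard API, so you can probe it directly. The sketch below shows the kind of word-granularity segmentation a layout engine leans on; the shape of the returned objects is the standard API's, not Pretext's internal format.

```javascript
// Word-granularity segments from the standard Intl.Segmenter API
// (Node.js 16+ and modern browsers). Word-like segments are the units
// a layout engine can legally break between.
const segmenter = new Intl.Segmenter('en', { granularity: 'word' });

function breakOpportunities(text) {
  // Each segment carries its start index; spaces and punctuation come
  // through as non-word-like segments between breakable units.
  return [...segmenter.segment(text)].map((s) => ({
    segment: s.segment,
    index: s.index,
    isWordLike: s.isWordLike === true,
  }));
}

breakOpportunities('Hello, world');
```

If this throws in your target environment, that environment needs the intl-segmenter-polyfill mentioned above.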
Yes. There's no React adapter in the official package — Pretext is intentionally framework-agnostic — but the integration is a small custom hook. The recommended pattern is useTextLayout(text, font, width) which memoizes prepare() on (text, font) and layout() on (prepared, width). Full React integration including SSR, virtual scrollers, and Next.js patterns is in the Pretext + React guide.
Yes. Pretext is written in TypeScript and ships its own type definitions in the package — no separate @types/pretext install required. The types work with strict: true and all the strict-mode flags including noUncheckedIndexedAccess and exactOptionalPropertyTypes. See the Pretext + TypeScript guide for type signatures, generic wrappers, and patterns.
Yes — Pretext is framework-agnostic. It has no peer dependencies on any framework. The integration pattern is the same as with React: cache the prepare() result, call layout() to get dimensions, set the resulting width/height as styles on your container. The exact API for caching and reactivity will differ by framework (Vue uses computed, Svelte uses derived stores, etc.) but the call sites are identical.
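The framework-agnostic caching pattern can be sketched without any framework at all. The factory name and cache shape below are illustrative inventions, and the prepare/layout stand-ins are dummies; only the division of labor (cache on text+font, recompute on width) reflects the pattern described above.

```javascript
// Framework-agnostic caching sketch: cache prepare() per (text, font),
// rerun only layout() when the width changes.
function createLayoutCache(prepare, layout) {
  const preparedCache = new Map();

  return function measure(text, font, width) {
    const key = font + '\u0000' + text; // NUL separator avoids key collisions
    let prepared = preparedCache.get(key);
    if (!prepared) {
      prepared = prepare(text, font); // expensive: once per (text, font)
      preparedCache.set(key, prepared);
    }
    return layout(prepared, width); // cheap: on every width change
  };
}

// Usage with dummy stand-ins, counting how often prepare() actually runs:
let prepareCalls = 0;
const measure = createLayoutCache(
  (text, font) => { prepareCalls += 1; return { text, font }; },
  (prepared, width) => ({ width, height: 20 })
);
measure('hi', '16px sans-serif', 300);
measure('hi', '16px sans-serif', 200); // same text+font: prepare() not rerun
```

Vue's computed, Svelte's derived stores, or a React useMemo are all just different ways of wiring this same cache into a reactivity system.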
Not currently. React Native doesn't have a canvas API, which Pretext uses for glyph-width measurement. There's been early discussion in the project's issues about a React Native adapter that would use platform-native text measurement APIs as the measurement source while keeping the pure-JavaScript layout phase, but nothing has shipped at the time of writing. For DOM-free text measurement in React Native today, the closest equivalent is react-native-text-size.
Yes, with one setup step: install @napi-rs/canvas (a Node-side canvas implementation with system font access). Once installed, Pretext's prepare() works in Node, and you can compute layouts during SSR. The benefit is that the server-rendered HTML ships with correct text dimensions, eliminating Cumulative Layout Shift (CLS). The pattern is detailed in the Pretext + React guide under "SSR and Hydration."
Yes. Pretext uses Intl.Segmenter for Unicode-correct segmentation across all major scripts, including Chinese, Japanese, and Korean. CJK text is segmented at the character level (not word level) and the locale-sensitive break rules (Japanese kinsoku rules, for example) are respected when you call setLocale('ja-JP') or similar. Pretext supports 12+ writing systems out of the box.
Pretext computes break opportunities for Arabic, Hebrew, and other RTL scripts correctly. The actual visual rendering of RTL text — character ordering, mirroring, bidirectional resolution — is still done by the browser's text engine when you render the text into the DOM (or by canvas's RTL rendering when you draw to canvas). Pretext gives you the line break positions; the renderer handles the visual ordering.
Yes. Emoji segments (including multi-codepoint emoji, ZWJ sequences like family or skin-tone modifiers, and flag emoji) are identified by the segmenter and treated as single units that can't be broken in the middle. The widths are measured via canvas's measureText(), which (in modern browsers) uses the font's actual emoji rendering for accurate widths.
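You can verify the "single unit" behavior with the standard API alone: grapheme-granularity segmentation follows UAX #29, so a ZWJ family sequence that spans five code points still comes back as one segment. This uses only Intl.Segmenter, not Pretext itself.

```javascript
// Grapheme-level segmentation keeps multi-codepoint emoji intact.
// A ZWJ family sequence (man + ZWJ + woman + ZWJ + girl) is five code
// points but one extended grapheme cluster, so it is one segment.
const graphemes = new Intl.Segmenter('en', { granularity: 'grapheme' });

function graphemeCount(text) {
  return [...graphemes.segment(text)].length;
}

graphemeCount('👨‍👩‍👧'); // 1, even though [...'👨‍👩‍👧'].length is 5
```

A layout engine that iterated code points instead of graphemes could break a line in the middle of that family and render two half-emoji.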
Pretext doesn't replace CSS — it complements it. Pretext measures text and computes break positions; CSS still renders the actual text. The pattern: use Pretext during render to compute the right width and height for your container, then let CSS draw the text inside. The full comparison with concrete examples (line-clamp, fit-to-box, virtual scrolling) is in Pretext vs CSS.
Yes, and more capably than CSS's -webkit-line-clamp: the Pretext approach gives you the truncated string itself, whereas CSS only changes the visual rendering and leaves the full string in the DOM. Use layoutWithLines, find the end index of line N, and slice the original string. The pattern is in Pretext Examples #6.
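The slicing step looks roughly like the sketch below. The helper name is hypothetical and the line data is hand-written for the example; in practice the per-line start/end indices would come from layoutWithLines, whose exact return shape is documented in the API reference.

```javascript
// Clamp text to maxLines by slicing at the end index of the last
// visible line. `lines` is assumed to be [{ start, end }, ...] with
// one entry per laid-out line (hand-written here for the example).
function clampToLines(text, lines, maxLines) {
  if (lines.length <= maxLines) return text; // nothing to truncate
  const cutoff = lines[maxLines - 1].end; // end of the last visible line
  return text.slice(0, cutoff).trimEnd() + '…';
}

// A 3-line layout clamped to 2 lines:
const text = 'one two three four five six';
const lines = [
  { start: 0, end: 8 },   // 'one two '
  { start: 8, end: 19 },  // 'three four '
  { start: 19, end: 27 }, // 'five six'
];
clampToLines(text, lines, 2); // 'one two three four…'
```

Because the result is a real string, you can render it anywhere (DOM, canvas, SVG) and the ellipsis participates in layout like any other character.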
Yes — AI apps are one of the largest categories of Pretext usage. The combination of streaming token rendering, variable-length messages, and long virtualized conversation history is a near-perfect fit for the two-phase prepare/layout model. The patterns for streaming, sticky-scroll, and virtualized chat are detailed in Pretext for AI apps.
Pretext was created by Cheng Lou, a member of the React core team from 2014 to 2017, the creator of react-motion, and an early ReScript/Reason contributor. After React, he spent several years at Midjourney working on rendering and UI infrastructure. The full background is on the Cheng Lou + Pretext page.
MIT. Free for personal and commercial use, including in proprietary products. The license file is in the GitHub repository.
npm install @chenglou/pretext
Or yarn add, pnpm add, or bun add. The package is on the public npm registry under the @chenglou scope. Full install details, version pinning strategy, and verification steps are on the npm Pretext page.
Yes — as @chenglou/pretext. There are several other unrelated packages whose names contain "pretext"; the canonical one is the @chenglou-scoped package. The npm package page has install commands, version history, and the README.
The 18 community demos at pretext.cool cover everything from simple "measure text height" to drag-sprite reflow at 60fps, virtualized chat with 50K messages, and magazine-style multi-column layouts. Each demo links to its source code. The Pretext demos page groups them by category.
The project is on GitHub at github.com/chenglou/pretext. Open an issue with a minimal reproduction. The maintainer is responsive to high-quality bug reports and considers feature requests, though the bar for new features is high — Pretext is intentionally small.
Yes, contributions are welcome, but the bar is high because the API is intentionally minimal. Bug fixes with tests are the easiest contributions to land. New features need discussion in an issue first. New demos for pretext.cool are accepted at github.com/chenglou/pretext.cool and have a much lower bar — see the demos page for the contribution guide.
The package lives under the @chenglou scope because the unscoped pretext name on npm is taken by an unrelated package. The scope also makes ownership and provenance clear: this is the package by Cheng Lou. Other "pretext"-named packages on npm are unrelated to this library — see the GitHub Pretext page for the disambiguation list.
No. PreTeXt with capital T (also written pretextbook.org) is a separate, unrelated project — an XML-based authoring system for academic textbooks, particularly mathematics. The naming overlap is unfortunate. This Pretext (lowercase, @chenglou/pretext) is a JavaScript text layout library and shares no code, authors, or organizational connection with PreTeXt the textbook system.
The roadmap is intentionally modest. Active areas include better RTL handling, more Unicode segmentation edge cases, and incremental performance improvements. The maintainer has been explicit that the API surface is meant to stay small — feature requests that would expand the API are discussed carefully and most are politely declined. Watch the GitHub releases page to see what actually ships.
pretext.cool is a community-maintained showcase, not affiliated with Cheng Lou or the official Pretext project.