[{"content":"Standard Schema has emerged as one of those elegant solutions that makes you wonder why we didn\u0026rsquo;t have it sooner. Released in January, it\u0026rsquo;s already making waves across the TypeScript ecosystem by providing a unified interface for schema validation libraries.\nThe Validation Tower of Babel Before diving into Standard Schema, let\u0026rsquo;s briefly recall the state of validation in the TypeScript world. We\u0026rsquo;ve had a proliferation of excellent libraries:\nZod with its robust type inference and composable API Valibot optimized for bundle size ArkType focusing on performance Yup as the veteran in the space io-ts with its functional approach Superstruct, TypeBox, and many others Each has its strengths, trade-offs, and passionate fan bases. But this diversity created a problem: tools and frameworks that accept schemas (like form libraries, API frameworks, or ORM query builders) had to either:\nPick a single validation library to support Build adapters for multiple libraries Create their own validation system (adding to the fragmentation!) Standard Schema Standard Schema is a specification that defines a common interface for TypeScript schema validation libraries. At its core, Standard Schema is beautifully simple. 
It defines a StandardSchemaV1 interface that validation libraries can implement:\nThe gist:\nexport interface StandardSchemaV1\u0026lt;Input = unknown, Output = Input\u0026gt; { readonly \u0026#39;~standard\u0026#39;: { readonly version: 1; readonly vendor: string; readonly validate: (value: unknown) =\u0026gt; Result\u0026lt;Output\u0026gt; | Promise\u0026lt;Result\u0026lt;Output\u0026gt;\u0026gt;; }; } export type Result\u0026lt;T\u0026gt; = | { value: T; issues?: undefined } | { value?: undefined; issues: any[] }; The tilde prefix (~standard) is a clever touch that both avoids naming conflicts and deprioritizes these properties in IDE autocompletion.\nColin McDonnell, the creator of Zod, announced Standard Schema 1.0 as a proposal \u0026ldquo;for a standard interface to be adopted across TypeScript validation libraries.\u0026rdquo; The goal is elegant: make it easier for libraries to accept user-defined schemas as part of their API, regardless of which validation library created them.\nHow It Works In Practice Let\u0026rsquo;s say we\u0026rsquo;re building a form library. 
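Before wiring this into anything, it helps to see how little is required to implement the interface by hand. Here is a hedged sketch: a tiny hand-rolled string validator exposing the ~standard property described above (names like stringSchema and Issue are illustrative, not from any published library).

```typescript
// Illustrative sketch: a hand-rolled validator implementing the
// StandardSchemaV1 shape from the spec. `stringSchema` and `Issue`
// are made-up names for this example.
type Issue = { message: string };

type Result<T> =
  | { value: T; issues?: undefined }
  | { value?: undefined; issues: Issue[] };

interface StandardSchemaV1<Input = unknown, Output = Input> {
  readonly '~standard': {
    readonly version: 1;
    readonly vendor: string;
    readonly validate: (value: unknown) => Result<Output> | Promise<Result<Output>>;
  };
}

// Implementing the interface is just a matter of exposing `~standard`:
const stringSchema: StandardSchemaV1<unknown, string> = {
  '~standard': {
    version: 1,
    vendor: 'hand-rolled',
    validate: (value) =>
      typeof value === 'string'
        ? { value }
        : { issues: [{ message: 'Expected a string' }] },
  },
};
```

Any tool that speaks Standard Schema could consume stringSchema exactly as it would a Zod or Valibot schema, which is the whole point of the spec.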
Instead of tying it to a specific validation library, we can accept any schema that implements the Standard Schema interface:\nfunction validateForm\u0026lt;T\u0026gt;( schema: { readonly \u0026#39;~standard\u0026#39;: { validate: (value: unknown) =\u0026gt; any } }, formData: unknown ): ValidationResult { const result = schema[\u0026#39;~standard\u0026#39;].validate(formData); // Process result consistently regardless of the schema library return transformToFormErrors(result); } Now users can bring their preferred validation library:\n// With Zod import { z } from \u0026#39;zod\u0026#39;; const zodSchema = z.object({ username: z.string().min(3), age: z.number().min(18), }); validateForm(zodSchema, formData); // With Valibot import * as v from \u0026#39;valibot\u0026#39;; const valibotSchema = v.object({ username: v.pipe(v.string(), v.minLength(3)), age: v.pipe(v.number(), v.minValue(18)), }); validateForm(valibotSchema, formData); These examples work seamlessly because both Zod and Valibot have implemented the Standard Schema interface.\nThe Ecosystem Impact What has made Standard Schema so successful in such a short time is its rapid adoption by major libraries. As of now:\nZod has it built-in as of v3.24 Valibot supports it natively ArkType implemented it natively Adapters exist for Yup, io-ts, and others This widespread adoption means that tools built on Standard Schema can immediately support a wide range of validation libraries. We\u0026rsquo;ve already seen this with:\nForm libraries like React Hook Form and TanStack Form adding Standard Schema support API frameworks like tRPC and Hono embracing the standard for input validation ORMs using it for query builders and input validation Testing tools leveraging it for property-based testing Empowering End-User Developers For end-user developers, Standard Schema offers several practical advantages:\n1. Freedom to Choose the Right Tool Different validation libraries excel in different areas. 
With Standard Schema, you can select libraries based on:\nBundle size: Choose Valibot for client-side code where every kilobyte matters Performance: Use ArkType where validation speed is critical Feature richness: Leverage Zod\u0026rsquo;s comprehensive API when needed Team familiarity: Use what your team already knows best All without being locked into a single vendor throughout your application.\n2. Simplified Experimentation Want to try out a new validation library? Before Standard Schema, this meant:\n// Before Standard Schema - Parallel implementations const zodSchema = z.object({ /* ... */ }); const valibotSchema = v.object({ /* ... */ }); const arkTypeSchema = t.object({ /* ... */ }); // Different validation calls for each library function validateWithZod(data) { /* ... */ } function validateWithValibot(data) { /* ... */ } function validateWithArkType(data) { /* ... */ } Now, you can simply:\n// Create schemas with different libraries const schema1 = z.object({ /* ... */ }); const schema2 = v.object({ /* ... */ }); const schema3 = t.object({ /* ... */ }); // One validation function for all function validate(schema, data) { return schema[\u0026#39;~standard\u0026#39;].validate(data); } This drastically reduces the cost of trying new tools and approaches.\n3. Future-Proofing Your Codebase Technologies evolve rapidly, and yesterday\u0026rsquo;s popular library might be tomorrow\u0026rsquo;s legacy code. By adopting Standard Schema, you\u0026rsquo;re building in an escape hatch for future migrations:\n// Your application code remains stable app.post(\u0026#39;/users\u0026#39;, validateBody(userSchema), createUser); // The implementation of userSchema can change over time // From Zod in 2024: const userSchema = z.object({ /* ... */ }); // To whatever comes next in 2026: const userSchema = nextGenValidator.object({ /* ... 
*/ }); Your core application logic remains unchanged even as validation libraries evolve.\nMoving Forward with Standard Schema Standard Schema represents a significant shift in how we approach validation in the TypeScript ecosystem. By focusing on interoperability rather than creating yet another validation library, it multiplies the value of existing tools and reduces fragmentation.\nIf you\u0026rsquo;re building libraries or frameworks that accept schemas, supporting Standard Schema means instantly supporting all compatible validation libraries with minimal code. The investment is small, but the value to your users is substantial.\nFor application developers, there\u0026rsquo;s virtually no downside. Use your preferred validation library that implements Standard Schema, and you\u0026rsquo;ll gain flexibility for future migrations while ensuring better integration with a growing number of tools and frameworks.\nThose creating validation libraries should consider implementing the interface. It\u0026rsquo;s often just a few lines of code to make your library instantly compatible with an expanding ecosystem of tools.\nWhat makes Standard Schema particularly promising is that it represents a more collaborative approach to solving common problems. For the TypeScript ecosystem that has sometimes struggled with fragmentation, it offers a glimpse of a more cohesive future, one built on cooperation rather than just competition.\nStandard Schema may be simple in implementation, but its impact on how we build TypeScript applications is potentially profound. And that\u0026rsquo;s something worth validating.\n","permalink":"/posts/2025-03-01-standard-schema-validation-babel-fish/","summary":"\u003cp\u003eStandard Schema has emerged as one of those elegant solutions that makes you wonder why we didn\u0026rsquo;t have it sooner. 
Released in Jan, it\u0026rsquo;s already making waves across the TypeScript ecosystem by providing a unified interface for schema validation libraries.\u003c/p\u003e\n\u003ch2 id=\"the-validation-tower-of-babel\"\u003eThe Validation Tower of Babel\u003c/h2\u003e\n\u003cp\u003eBefore diving into Standard Schema, we shall briefly recall the state of validation in the TypeScript world. We\u0026rsquo;ve had a proliferation of excellent libraries:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eZod with its robust type inference and composable API\u003c/li\u003e\n\u003cli\u003eValibot optimized for bundle size\u003c/li\u003e\n\u003cli\u003eArkType focusing on performance\u003c/li\u003e\n\u003cli\u003eYup as the veteran in the space\u003c/li\u003e\n\u003cli\u003eio-ts with its functional approach\u003c/li\u003e\n\u003cli\u003eSuperstruct, TypeBox, and many others\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eEach has its strengths, trade-offs, and passionate fan bases. But this diversity created a problem: tools and frameworks that accept schemas (like form libraries, API frameworks, or ORM query builders) had to either:\u003c/p\u003e","title":"Standard Schema: The TS Validation Babel Fish"},{"content":"The upgraded version of attr() is exactly the kind of CSS feature that makes you want to immediately open CodePen and start trying dumb little experiments.\nFor years, attr() has mostly lived in the content: bucket:\n.tag::before { content: attr(data-label); } Useful, but narrow.\nThe interesting part now is that attributes can be read as typed values. 
So instead of only pulling strings into generated content, CSS can parse them as lengths, colors, numbers, integers, and more.\nThat is enough to make attr() feel newly relevant, even if the best use cases are still pretty small-scale.\nWhat Changed Now we can do things like this:\n.card { grid-column: span attr(data-span type(\u0026lt;integer\u0026gt;), 1); font-size: attr(data-size type(\u0026lt;length\u0026gt;), 1rem); border-color: attr(data-accent type(\u0026lt;color\u0026gt;), currentColor); } That is a much bigger deal than it sounds. It means a bit of data already sitting in the markup can become a real styling input without getting translated through inline styles or extra template logic first.\nA Few Experiments That Feel Immediately Fun 1. Small layout hints If you have a simple card grid and want a couple of items to span wider, this is the first pattern that feels surprisingly natural:\n\u0026lt;article class=\u0026#34;card\u0026#34; data-span=\u0026#34;2\u0026#34;\u0026gt; \u0026lt;h2\u0026gt;Release Notes\u0026lt;/h2\u0026gt; \u0026lt;/article\u0026gt; .card { grid-column: span 1; } @supports (x: attr(x type(*))) { .card[data-span] { grid-column: span attr(data-span type(\u0026lt;integer\u0026gt;), 1); } } It is small, readable, and kind of satisfying. The HTML carries the hint, and CSS decides what to do with it.\nSee the Pen Advanced attr() Grid Span by pioneerlike on CodePen. 2. Theme accents coming from content or CMS data This one is probably the most immediately useful to me. If content already has a color-ish value attached to it, CSS can finally consume it directly.\n\u0026lt;li class=\u0026#34;status-pill\u0026#34; data-accent=\u0026#34;oklch(62% 0.18 145)\u0026#34;\u0026gt; Healthy \u0026lt;/li\u0026gt; .status-pill { color: attr(data-accent type(\u0026lt;color\u0026gt;), currentColor); box-shadow: inset 0 0 0 1px attr(data-accent type(\u0026lt;color\u0026gt;), currentColor); } The nice part is that it still feels like CSS. 
You are not dumping a full style attribute into the markup, you are just handing CSS a typed input.\nSee the Pen Advanced attr() Accent Colors by pioneerlike on CodePen. 3. Typography tweaks for tightly-scoped content modules This also gets interesting for content modules or CMS-driven blocks where you want a little flexibility without inventing a bunch of special classes.\n\u0026lt;div class=\u0026#34;promo-title\u0026#34; data-size=\u0026#34;1.35rem\u0026#34;\u0026gt; Spring launch week \u0026lt;/div\u0026gt; .promo-title { font-size: attr(data-size type(\u0026lt;length\u0026gt;), 1.125rem); } I would not build an entire type system this way, but that is not really the point. The point is that some small configuration values can now stay where they already are instead of being copied somewhere else.\nThe Real Appeal What I like most here is that it makes HTML feel a little more expressive without requiring much ceremony.\nThere has always been a slightly awkward gap between \u0026ldquo;this element has a value attached to it\u0026rdquo; and \u0026ldquo;CSS can do something meaningful with that value.\u0026rdquo; Usually the answer was one of:\nadd another class use an inline style write a bit of JavaScript push more work into templates Advanced attr() does not replace any of those. It just gives us another option, and for certain small UI patterns it is a pretty elegant one.\nThe Current Vibe This is still very much a feature to experiment with, poke at, and keep in the mental toolbox for later. The support story is what it is, so for now the fun is mostly in discovery.\nBut I do think this is one of those features that will quietly lead to some nice patterns once it settles in. Not because it changes CSS architecture in some grand way, but because it removes a tiny bit of friction in places where the platform used to feel oddly rigid.\nThat is usually how the good CSS features land anyway. 
You play with them first, and only later realize they made a few annoying problems less annoying.\n","permalink":"/posts/2025-02-22-advanced-attr-css-pragmatic//posts/2025-02-22-advanced-attr-css-pragmatic/","summary":"\u003cp\u003eThe upgraded version of \u003ccode\u003eattr()\u003c/code\u003e is exactly the kind of CSS feature that makes you want to immediately open CodePen and start trying dumb little experiments.\u003c/p\u003e\n\u003cp\u003eFor years, \u003ccode\u003eattr()\u003c/code\u003e has mostly lived in the \u003ccode\u003econtent:\u003c/code\u003e bucket:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-css\" data-lang=\"css\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nc\"\u003etag\u003c/span\u003e\u003cspan class=\"p\"\u003e::\u003c/span\u003e\u003cspan class=\"nd\"\u003ebefore\u003c/span\u003e \u003cspan class=\"p\"\u003e{\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"k\"\u003econtent\u003c/span\u003e\u003cspan class=\"p\"\u003e:\u003c/span\u003e \u003cspan class=\"nb\"\u003eattr\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"n\"\u003edata\u003c/span\u003e\u003cspan class=\"o\"\u003e-\u003c/span\u003e\u003cspan class=\"n\"\u003elabel\u003c/span\u003e\u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e}\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eUseful, but narrow.\u003c/p\u003e\n\u003cp\u003eThe interesting part now is that attributes can be read as typed values. 
So instead of only pulling strings into generated content, CSS can parse them as lengths, colors, numbers, integers, and more.\u003c/p\u003e","title":"Playing with Advanced attr() in CSS"},{"content":"Last June, I wrote \u0026ldquo;The Quiet Power of Server-Sent Events for Real-Time Apps\u0026rdquo; where I sang the praises of SSE as a lightweight alternative to WebSockets for many real-time scenarios.\nWhile the simplicity of SSE remains one of its greatest strengths, that simplicity can be deceptive. A naive implementation might work perfectly in development but crumble under the harsh realities of spotty connections, aggressive proxies, and browser quirks. Let\u0026rsquo;s dive into the techniques that transform fragile SSE connections into resilient data pipelines.\nSetting the Foundation with Proper Headers Every robust SSE implementation begins with proper HTTP headers. They\u0026rsquo;re critical instructions to browsers and intermediaries about how to handle your connection.\n// Express.js example app.get(\u0026#39;/events\u0026#39;, (req, res) =\u0026gt; { res.setHeader(\u0026#39;Content-Type\u0026#39;, \u0026#39;text/event-stream\u0026#39;); res.setHeader(\u0026#39;Cache-Control\u0026#39;, \u0026#39;no-cache\u0026#39;); res.setHeader(\u0026#39;Connection\u0026#39;, \u0026#39;keep-alive\u0026#39;); // Prevents Nginx from buffering the response res.setHeader(\u0026#39;X-Accel-Buffering\u0026#39;, \u0026#39;no\u0026#39;); // Additional cross-origin permissions if needed res.setHeader(\u0026#39;Access-Control-Allow-Origin\u0026#39;, \u0026#39;*\u0026#39;); // ... 
rest of your SSE implementation }); Or if you\u0026rsquo;re using Hono with Bun (which has gained significant traction over the past year):\nimport { Hono } from \u0026#39;hono\u0026#39; import { streamSSE } from \u0026#39;hono/streaming\u0026#39; const app = new Hono() app.get(\u0026#39;/events\u0026#39;, (c) =\u0026gt; { return streamSSE(c, async (stream) =\u0026gt; { // streamSSE automatically sets the proper Content-Type header // The rest of your event stream logic goes here }) }) The Content-Type: text/event-stream header is absolutely mandatory; it\u0026rsquo;s what tells the browser to treat this as an SSE connection. But equally important is Cache-Control: no-cache to prevent any intermediary from caching your dynamic content, and Connection: keep-alive to maintain the connection over time.\nThat X-Accel-Buffering: no header is a lifesaver when running behind Nginx, which is notorious for buffering responses unless explicitly told not to.\nResilient Message Delivery with Event IDs The real magic of robust SSE implementations comes from proper use of event IDs. Each event you send should include a unique, sequential identifier:\nlet eventId = 0; function sendEvent(res, data, eventName = \u0026#39;message\u0026#39;) { res.write(`id: ${eventId}\\n`); res.write(`event: ${eventName}\\n`); res.write(`data: ${JSON.stringify(data)}\\n\\n`); eventId++; } When a client reconnects after a disconnection, browsers automatically send the ID of the last event they received in the Last-Event-ID header. 
This is your opportunity to resume the stream exactly where the client left off:\n// Server implementation with reconnection support app.get(\u0026#39;/events\u0026#39;, (req, res) =\u0026gt; { // Set headers as shown earlier const lastEventId = parseInt(req.headers[\u0026#39;last-event-id\u0026#39;] || \u0026#39;0\u0026#39;, 10); // Retrieve missed events const missedEvents = eventStore.getEventsSince(lastEventId); // Send missed events first missedEvents.forEach(event =\u0026gt; { sendEvent(res, event.data, event.type); }); // Continue with regular event stream // ... }); For this to work, you need some way to store recent events. A simple approach is using a fixed-size circular buffer:\nclass EventStore { constructor(capacity = 100) { this.events = []; this.capacity = capacity; } addEvent(event) { // Add event with auto-incrementing ID const id = this.events.length \u0026gt; 0 ? this.events[this.events.length - 1].id + 1 : 1; this.events.push({ ...event, id }); // Trim if we exceed capacity if (this.events.length \u0026gt; this.capacity) { this.events.shift(); } return id; } getEventsSince(id) { return this.events.filter(event =\u0026gt; event.id \u0026gt; id); } } const eventStore = new EventStore(); This approach ensures that clients don\u0026rsquo;t miss messages during brief disconnections. The fixed-size buffer prevents memory leaks while still accommodating reasonable reconnection windows.\nKeeping Connections Alive One of SSE\u0026rsquo;s most common failure points is intermediaries like proxies and load balancers silently closing seemingly idle connections. The solution? Regular heartbeats:\nfunction startEventStream(req, res) { // Set headers... 
// Set up heartbeat const heartbeatInterval = setInterval(() =\u0026gt; { res.write(\u0026#39;: keepalive\\n\\n\u0026#39;); }, 15000); // 15 seconds is a good balance // Clean up on client disconnect req.on(\u0026#39;close\u0026#39;, () =\u0026gt; { clearInterval(heartbeatInterval); }); // Rest of your SSE logic... } These comment lines (starting with :) are ignored by SSE clients but keep the connection active. Fifteen seconds is usually sufficient, but you may need to adjust based on your infrastructure. AWS Application Load Balancers, for instance, have a default idle timeout of 60 seconds, so a 30-second heartbeat would be appropriate there.\nGraceful Error Handling Both client and server need sophisticated error handling for truly robust SSE. On the server side, proper cleanup is critical:\napp.get(\u0026#39;/events\u0026#39;, (req, res) =\u0026gt; { const clientId = req.query.clientId; // Validate client authorization if (!isAuthorized(clientId)) { // Don\u0026#39;t use SSE response format for errors return res.status(403).json({ error: \u0026#39;Unauthorized\u0026#39; }); } // Set up connection const clientConnection = registerClient(clientId, res); // Clean up on client disconnect req.on(\u0026#39;close\u0026#39;, () =\u0026gt; { unregisterClient(clientId); }); // Handle server shutdown gracefully req.on(\u0026#39;end\u0026#39;, () =\u0026gt; { unregisterClient(clientId); }); }); On the client side, default browser reconnection behavior is often too simplistic. 
A more robust approach implements exponential backoff with jitter:\nclass RobustEventSource { constructor(url, options = {}) { this.url = url; this.options = options; this.eventSource = null; this.reconnectAttempt = 0; this.maxReconnectTime = options.maxReconnectTime || 30000; this.listeners = {}; this.connect(); } connect() { this.eventSource = new EventSource(this.url); this.eventSource.onopen = (event) =\u0026gt; { console.log(\u0026#39;Connection established\u0026#39;); this.reconnectAttempt = 0; }; this.eventSource.onerror = (event) =\u0026gt; { this.handleError(event); }; // Forward all events to listeners Object.entries(this.listeners).forEach(([type, handlers]) =\u0026gt; { handlers.forEach(handler =\u0026gt; { this.eventSource.addEventListener(type, handler); }); }); } handleError(event) { if (this.eventSource.readyState === EventSource.CLOSED) { // Connection closed, implement backoff with jitter const baseTime = Math.min( this.maxReconnectTime, 1000 * Math.pow(2, this.reconnectAttempt) ); // Add jitter (±30% of baseTime) const jitter = baseTime * 0.3 * (Math.random() * 2 - 1); const reconnectTime = baseTime + jitter; console.log(`Connection lost. 
Reconnecting in ${Math.round(reconnectTime / 1000)}s`); setTimeout(() =\u0026gt; { this.reconnectAttempt++; this.connect(); }, reconnectTime); } } addEventListener(type, handler) { if (!this.listeners[type]) { this.listeners[type] = []; } this.listeners[type].push(handler); if (this.eventSource) { this.eventSource.addEventListener(type, handler); } } close() { if (this.eventSource) { this.eventSource.close(); this.eventSource = null; } } } // Usage const events = new RobustEventSource(\u0026#39;/events\u0026#39;, { maxReconnectTime: 60000 }); events.addEventListener(\u0026#39;message\u0026#39;, (event) =\u0026gt; { console.log(\u0026#39;Received:\u0026#39;, JSON.parse(event.data)); }); events.addEventListener(\u0026#39;special\u0026#39;, (event) =\u0026gt; { console.log(\u0026#39;Special event:\u0026#39;, JSON.parse(event.data)); }); This implementation goes beyond the browser\u0026rsquo;s native reconnection by adding exponential backoff with jitter, a strategy that prevents thundering herds when services recover from outages. The jitter ensures that multiple clients don\u0026rsquo;t all attempt to reconnect simultaneously.\nScaling Considerations Browser connection limits can become a bottleneck as your application scales. Most browsers limit connections per domain to around 6-8 connections. 
With HTTP/2 this is less of an issue, but if you\u0026rsquo;re supporting older clients or if HTTP/2 isn\u0026rsquo;t an option, consider multiplexing multiple logical streams over a single SSE connection:\n// Server: Multiplex multiple event types function broadcastToUser(userId, eventType, data) { const userConnection = activeConnections[userId]; if (userConnection) { sendEvent(userConnection, { type: eventType, payload: data }, \u0026#39;multiplex\u0026#39;); } } // Client: Demultiplex events events.addEventListener(\u0026#39;multiplex\u0026#39;, (event) =\u0026gt; { const { type, payload } = JSON.parse(event.data); // Route to appropriate handler switch (type) { case \u0026#39;notification\u0026#39;: handleNotification(payload); break; case \u0026#39;chat\u0026#39;: handleChatMessage(payload); break; case \u0026#39;status\u0026#39;: handleStatusUpdate(payload); break; } }); This pattern lets you send notifications, chat messages, and status updates all over a single connection, dramatically reducing the connection overhead.\nA Real-World Integration Example Let\u0026rsquo;s put it all together with a practical example. 
Here\u0026rsquo;s how you might implement a robust SSE system for streaming AI-generated responses (a common use case these days):\n// Server (using Express and an AI client library) import express from \u0026#39;express\u0026#39;; import { AIClient } from \u0026#39;some-ai-library\u0026#39;; const app = express(); const aiClient = new AIClient({ apiKey: process.env.AI_API_KEY }); const eventStore = new EventStore(50); // Store last 50 events app.get(\u0026#39;/ai-stream/:conversationId\u0026#39;, async (req, res) =\u0026gt; { const { conversationId } = req.params; const lastEventId = parseInt(req.headers[\u0026#39;last-event-id\u0026#39;] || \u0026#39;0\u0026#39;, 10); // Set SSE headers res.setHeader(\u0026#39;Content-Type\u0026#39;, \u0026#39;text/event-stream\u0026#39;); res.setHeader(\u0026#39;Cache-Control\u0026#39;, \u0026#39;no-cache\u0026#39;); res.setHeader(\u0026#39;Connection\u0026#39;, \u0026#39;keep-alive\u0026#39;); res.setHeader(\u0026#39;X-Accel-Buffering\u0026#39;, \u0026#39;no\u0026#39;); // Heartbeat interval const heartbeat = setInterval(() =\u0026gt; { res.write(\u0026#39;: keepalive\\n\\n\u0026#39;); }, 15000); // Clean up on close req.on(\u0026#39;close\u0026#39;, () =\u0026gt; { clearInterval(heartbeat); }); try { // Send any missed events const missedEvents = eventStore.getEventsSince(lastEventId); for (const event of missedEvents) { sendEvent(res, event.data, event.type, event.id); } // Get the conversation context const conversation = await getConversation(conversationId); // Stream the AI response const stream = await aiClient.generateStreamingResponse(conversation.messages); let currentId = lastEventId; for await (const chunk of stream) { currentId = eventStore.addEvent({ type: \u0026#39;chunk\u0026#39;, data: { text: chunk.text } }); sendEvent(res, { text: chunk.text }, \u0026#39;chunk\u0026#39;, currentId); } // Mark completion currentId = eventStore.addEvent({ type: \u0026#39;done\u0026#39;, data: { conversationId } }); 
sendEvent(res, { conversationId }, \u0026#39;done\u0026#39;, currentId); } catch (error) { console.error(\u0026#39;Error streaming AI response:\u0026#39;, error); // Send error event const errorId = eventStore.addEvent({ type: \u0026#39;error\u0026#39;, data: { message: \u0026#39;Failed to generate response\u0026#39; } }); sendEvent(res, { message: \u0026#39;Failed to generate response\u0026#39; }, \u0026#39;error\u0026#39;, errorId); // End the stream clearInterval(heartbeat); res.end(); } }); function sendEvent(res, data, event, id) { res.write(`id: ${id}\\n`); res.write(`event: ${event}\\n`); res.write(`data: ${JSON.stringify(data)}\\n\\n`); } app.listen(3000, () =\u0026gt; { console.log(\u0026#39;Server running on port 3000\u0026#39;); }); And the corresponding client implementation:\nclass AIStreamClient { constructor(conversationId) { this.conversationId = conversationId; this.responseText = \u0026#39;\u0026#39;; this.eventSource = null; this.onChunkCallbacks = []; this.onCompleteCallbacks = []; this.onErrorCallbacks = []; } startStream() { this.responseText = \u0026#39;\u0026#39;; this.eventSource = new EventSource(`/ai-stream/${this.conversationId}`); this.eventSource.addEventListener(\u0026#39;chunk\u0026#39;, (event) =\u0026gt; { const { text } = JSON.parse(event.data); this.responseText += text; this.onChunkCallbacks.forEach(callback =\u0026gt; callback(text, this.responseText) ); }); this.eventSource.addEventListener(\u0026#39;done\u0026#39;, () =\u0026gt; { this.eventSource.close(); this.onCompleteCallbacks.forEach(callback =\u0026gt; callback(this.responseText) ); }); this.eventSource.addEventListener(\u0026#39;error\u0026#39;, (event) =\u0026gt; { const data = event.data ? 
JSON.parse(event.data) : { message: \u0026#39;Unknown error\u0026#39; }; this.onErrorCallbacks.forEach(callback =\u0026gt; callback(data.message) ); this.eventSource.close(); }); // Handle connection errors this.eventSource.onerror = () =\u0026gt; { // If we\u0026#39;re not already closed, try to reconnect // (browser will handle this automatically with Last-Event-ID) if (this.eventSource.readyState === EventSource.CLOSED) { console.log(\u0026#39;Connection closed, browser will attempt to reconnect...\u0026#39;); } }; } onChunk(callback) { this.onChunkCallbacks.push(callback); return this; } onComplete(callback) { this.onCompleteCallbacks.push(callback); return this; } onError(callback) { this.onErrorCallbacks.push(callback); return this; } stop() { if (this.eventSource) { this.eventSource.close(); this.eventSource = null; } } } // Usage const streamClient = new AIStreamClient(\u0026#39;conversation-123\u0026#39;) .onChunk((chunk, fullText) =\u0026gt; { document.getElementById(\u0026#39;response\u0026#39;).textContent = fullText; }) .onComplete((fullResponse) =\u0026gt; { console.log(\u0026#39;AI response complete:\u0026#39;, fullResponse); }) .onError((message) =\u0026gt; { console.error(\u0026#39;Stream error:\u0026#39;, message); }); streamClient.startStream(); This implementation handles all the patterns we\u0026rsquo;ve discussed: proper headers, event IDs for resuming interrupted streams, heartbeats to keep connections alive, and comprehensive error handling. It\u0026rsquo;s particularly well-suited for streaming AI-generated content, which has become one of the most popular use cases for SSE in recent months.\nFinal Thoughts Server-Sent Events remain one of the most underappreciated tools for real-time web functionality. 
Their simplicity compared to WebSockets is a genuine strength, but that simplicity must be paired with careful implementation to create truly robust systems.\nThe patterns covered here (proper headers, event IDs, heartbeats, error handling, and scaling considerations) represent hard-earned lessons from real-world implementations. Apply them consistently, and you\u0026rsquo;ll build SSE-based systems that can withstand the unpredictable nature of production environments.\nSSE isn\u0026rsquo;t always the right choice. If you need bidirectional communication, WebSockets may still be your best bet. But for server-to-client streaming, a well-implemented SSE solution offers an elegant, standards-based approach with excellent browser support and minimal overhead.\n","permalink":"/posts/2025-01-30-advanced-patterns-sse/","summary":"\u003cp\u003eLast June, I wrote \u003ca href=\"../2024-06-17-server-sent-events-real-time-apps\"\u003e\u0026ldquo;The Quiet Power of Server-Sent Events for Real-Time Apps\u0026rdquo;\u003c/a\u003e where I sang the praises of SSE as a lightweight alternative to WebSockets for many real-time scenarios.\u003c/p\u003e\n\u003cp\u003eWhile the simplicity of SSE remains one of its greatest strengths, that simplicity can be deceptive. A naive implementation might work perfectly in development but crumble under the harsh realities of spotty connections, aggressive proxies, and browser quirks. Let\u0026rsquo;s dive into the techniques that transform fragile SSE connections into resilient data pipelines.\u003c/p\u003e","title":"Advanced Patterns for Production-Ready SSE (continued)"},{"content":"Back in early 2020, I wrote a blog post titled \u0026ldquo;Svelte 3: The Compiler as Your Framework\u0026rdquo;. 
Like many developers at the time, I was blown away by Rich Harris\u0026rsquo;s compiler-centric approach that promised to solve the virtual DOM overhead of React while offering a delightfully simple developer experience. The \u0026ldquo;write less code\u0026rdquo; mantra resonated with me, and for a handful of smaller projects, Svelte proved to be genuinely useful.\nBut then, as projects grew more complex and team considerations came into play, Svelte gradually slipped from my daily toolkit. The ecosystem wasn\u0026rsquo;t quite there yet. Component libraries were sparse compared to React\u0026rsquo;s thriving marketplace. Finding developers experienced with Svelte was challenging. The \u0026ldquo;real world\u0026rdquo; pushed me back toward React and occasionally Vue for client work.\nFast forward to today, and Svelte 5 has officially landed. When the Svelte team announced its release last month, my interest was immediately piqued.\nRunes: Reactivity Reimagined The most significant change in Svelte 5 is the introduction of \u0026ldquo;runes\u0026rdquo; — a new reactivity system that fundamentally changes how state management works. If you\u0026rsquo;re unfamiliar with the term, the Svelte team defines runes as \u0026ldquo;a letter or mark used as a mystical or magic symbol,\u0026rdquo; which feels appropriate given how they transform Svelte\u0026rsquo;s internals.\nIn Svelte 3 and 4, reactivity was largely implicit. When you updated a variable, Svelte would magically know to update the DOM. This approach was elegant but had limitations when dealing with more complex state management scenarios.\nThe new runes-based system makes reactivity explicit using special symbols like $state() for reactive variables, $derived() for derived values, and $effect() for side effects. 
Here\u0026rsquo;s a simple example:\nfunction TaskTracker() { let tasks = $state([ { id: 1, text: \u0026#39;Learn Svelte 5\u0026#39;, completed: false }, { id: 2, text: \u0026#39;Build a demo app\u0026#39;, completed: false } ]); // Derived state using runes let completedCount = $derived(tasks.filter(t =\u0026gt; t.completed).length); let pendingCount = $derived(tasks.length - completedCount); $effect(() =\u0026gt; { if (completedCount === tasks.length \u0026amp;\u0026amp; tasks.length \u0026gt; 0) { console.log(\u0026#39;All tasks completed! 🎉\u0026#39;); } }); return { addTask(text) { tasks = [...tasks, { id: Date.now(), text, completed: false }]; }, toggleTask(id) { tasks = tasks.map(task =\u0026gt; task.id === id ? { ...task, completed: !task.completed } : task ); }, get tasks() { return tasks; }, get progress() { return `${completedCount}/${tasks.length} completed`; } }; } This approach feels more explicit and composable than before. It reminds me somewhat of React\u0026rsquo;s hooks, but with Svelte\u0026rsquo;s trademark less-boilerplate approach. The main benefit is that these runes work consistently across both component and non-component code, making it easier to extract logic.\nI admit I initially approached runes with skepticism. Another syntax to learn? More magic? But my perspective completely shifted when I realized what they enable outside of .svelte files. For the first time, I can create reactive logic in standard JavaScript modules that can be imported into Svelte templates while fully encapsulating their reactivity.\nThis is revolutionary for testing. I can now write vitest tests against these external reactive components, something that was awkward or impossible before. 
As the original \u0026ldquo;Introducing runes\u0026rdquo; announcement explains: \u0026ldquo;Having code behave one way inside .svelte files and another inside .js can make it hard to refactor code\u0026hellip; With runes, reactivity extends beyond the boundaries of your .svelte files.\u0026rdquo;\n// shoppingCart.js - a fully testable reactive module export function createShoppingCart() { let items = $state([]); let isPromoApplied = $state(false); // Derived values const subtotal = $derived( items.reduce((sum, item) =\u0026gt; sum + (item.price * item.quantity), 0) ); const discount = $derived(isPromoApplied ? subtotal * 0.1 : 0); const total = $derived(subtotal - discount); function addItem(product, quantity = 1) { const existingItem = items.find(item =\u0026gt; item.id === product.id); if (existingItem) { items = items.map(item =\u0026gt; item.id === product.id ? {...item, quantity: item.quantity + quantity} : item ); } else { items = [...items, {...product, quantity}]; } } return { get items() { return items; }, get subtotal() { return subtotal; }, get total() { return total; }, addItem, applyPromo(valid) { isPromoApplied = valid; } }; } // And in your component: import { createShoppingCart } from \u0026#39;./shoppingCart.js\u0026#39;; function CheckoutComponent() { const cart = createShoppingCart(); // Use cart.items, cart.total and methods in your component } While many frontend developers rely primarily on TypeScript to guarantee correctness (and the Svelte community has had a complex relationship with TypeScript), I\u0026rsquo;ve always preferred the confidence that comes from robust testing. Svelte 5 finally bridges this gap in a way no other mainstream framework has managed.\nCrucially, this system remains a compile-time transformation — Svelte is still generating highly optimized vanilla JavaScript rather than shipping a runtime framework. 
The performance benefits that initially attracted me to Svelte 3 remain intact, but with a more flexible state management model.\nSnippets: Composition Without the Complexity Another game-changer is the new snippet feature, which provides a more intuitive way to build reusable UI elements:\n{#snippet todoItem(todo)} \u0026lt;div class=\u0026#34;flex items-center gap-2\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;checkbox\u0026#34; bind:checked={todo.completed} /\u0026gt; \u0026lt;span class:line-through={todo.completed}\u0026gt; {todo.title} \u0026lt;/span\u0026gt; \u0026lt;button on:click={() =\u0026gt; removeTodo(todo.id)}\u0026gt;×\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; {/snippet} \u0026lt;main\u0026gt; {#each todos as todo (todo.id)} {@render todoItem(todo)} {/each} \u0026lt;/main\u0026gt; This solves one of my long-standing frustrations with Svelte — the awkwardness of component composition compared to frameworks like React. Snippets provide a clean, declarative way to define and reuse template fragments without the overhead of creating full components.\nThe mental model is straightforward: define a snippet with parameters, then render it wherever needed with the {@render ...} syntax. No more slot spaghetti or prop drilling just to extract a small piece of UI logic.\nThe New CLI: A Unified Command Line Interface Perhaps one of the most practical quality-of-life improvements is the introduction of the new sv CLI that unifies the Svelte development experience. 
While Svelte has had various CLI tools in the past (including those offered through SvelteKit), the new dedicated CLI brings a more cohesive approach to development workflows.\nThe sv command serves as a single entrypoint for Svelte-related tasks, with various subcommands that handle different aspects of the development cycle:\n# Creating a new project npx sv create my-project # Using the sv CLI npx sv check # Type checking npx sv format # Formatting Svelte files npx sv build # Building for production What I particularly appreciate is how the CLI integrates with existing tooling while providing Svelte-specific optimizations. For instance, the build command produces highly optimized bundles tailored specifically for Svelte\u0026rsquo;s compiler output.\nFor those of us working with Svelte in production environments, having a standardized CLI that handles everything from type checking to builds represents a significant step forward in maturity for the ecosystem.\nMigration: Not as Painful as Expected One of my concerns about Svelte 5 was backward compatibility. The introduction of runes represents a significant shift in Svelte\u0026rsquo;s mental model, and I worried it might break existing codebases.\nThankfully, the Svelte team has been remarkably thoughtful about this transition. As sveltekit.io notes, \u0026ldquo;Starting a new project with Svelte 5\u0026hellip; they give you the option to opt into Svelte 5 features, including runes.\u0026rdquo; You can even scope runes on a component-by-component basis.\nIn my testing, migrating a small Svelte 4 application to Svelte 5 was relatively painless. The old reactive syntax still works, and you can gradually adopt runes as you update components. The comprehensive migration guide provides clear examples for each pattern that needs updating.\nThe Ecosystem: Growing Rapidly One of the reasons I originally drifted away from Svelte was its limited ecosystem compared to React. 
While it\u0026rsquo;s still not at React\u0026rsquo;s level (and may never need to be), the situation has improved dramatically.\nStyling frameworks like TailwindCSS work seamlessly with Svelte 5. Component libraries such as Skeleton, Carbon, and Melt UI have either already updated for Svelte 5 or have release candidates available. The SvelteKit framework (Svelte\u0026rsquo;s answer to Next.js) has been updated to fully leverage Svelte 5\u0026rsquo;s capabilities.\nThe TypeScript support is now excellent, with proper typing for all the new runes and snippets features. This was another pain point in earlier versions that has been thoroughly addressed.\nSo, Where Does Svelte 5 Fit in My Toolkit? After spending time with Svelte 5, I\u0026rsquo;m thoroughly impressed, but I\u0026rsquo;m not quite ready to make it my default framework for all projects. The reality of client work often requires pragmatic choices based on team familiarity, long-term maintenance, and ecosystem maturity.\nThat said, Svelte 5 now has a definite place in my recommendations. For clients looking for exceptional performance on lightweight sites, rapid development cycles, or projects where bundle size matters (like content-heavy marketing sites or progressive web apps), Svelte is now my honest recommendation. As swyx.io noted way back in 2020, \u0026ldquo;Svelte for Sites, React for Apps\u0026rdquo; had a certain wisdom to it — but with Svelte 5\u0026rsquo;s improvements, that boundary is increasingly blurry.\nThe testability improvements are particularly compelling for projects that need to maintain high quality over time. When a client values clean code and maintainability as much as initial development speed, Svelte 5 offers a surprisingly mature solution.\nWhat truly stands out is how Svelte 5 manages to add power and flexibility without sacrificing its core simplicity. 
That\u0026rsquo;s a rare achievement in the framework space, where feature additions often come with corresponding increases in complexity.\nIf you, like me, were once enamored with Svelte but drifted away as other frameworks evolved, now might be the perfect time to take another look. This innovative framework has returned to the spotlight, and while it may not dethrone the incumbents for every use case, it\u0026rsquo;s carved out a compelling niche that\u0026rsquo;s increasingly hard to ignore.\n","permalink":"/posts/2024-11-16-svelte-5-comback-story//posts/2024-11-16-svelte-5-comback-story/","summary":"\u003cp\u003eBack in early 2020, I wrote a blog post titled \u003ca href=\"../2020-11-02-svelte-3-compiler-as-framework\"\u003e\u0026ldquo;Svelte 3: The Compiler as Your Framework\u0026rdquo;\u003c/a\u003e. Like many developers at the time, I was blown away by Rich Harris\u0026rsquo;s compiler-centric approach that promised to solve the virtual DOM overhead of React while offering a delightfully simple developer experience. The \u0026ldquo;write less code\u0026rdquo; mantra resonated with me, and for a handful of smaller projects, Svelte proved to be genuinely useful.\u003c/p\u003e\n\u003cp\u003eBut then, as projects grew more complex and team considerations came into play, Svelte gradually slipped from my daily toolkit. The ecosystem wasn\u0026rsquo;t quite there yet. Component libraries were sparse compared to React\u0026rsquo;s thriving marketplace. Finding developers experienced with Svelte was challenging. The \u0026ldquo;real world\u0026rdquo; pushed me back toward React and occasionally Vue for client work.\u003c/p\u003e","title":"Svelte 5: The Comeback Story of a Revolutionary Framework"},{"content":"For web eng friends who are building TypeScript full-stack applications, chances are you\u0026rsquo;ve either used tRPC or had it enthusiastically recommended to you. 
And for very good reasons: tRPC has revolutionized how we build type-safe APIs, eliminating the schema definition/code generation dance that plagued earlier approaches.\nI\u0026rsquo;ve been a happy tRPC user since v9, and I\u0026rsquo;m not here to convince you to abandon it. The developer experience benefits are real. End-to-end type safety without code generation, excellent editor autocomplete, and runtime validation that just works. Plus, as the latest StackOverflow survey indicates, TypeScript developers continue to command higher salaries than their JavaScript counterparts.\nBut after adopting tRPC in several projects, from rapid MVPs to more complex applications, I\u0026rsquo;ve discovered there are legitimate scenarios where reaching for tRPC isn\u0026rsquo;t the slam-dunk decision that Twitter (or X, or whatever we\u0026rsquo;re calling it now) threads make it out to be.\nWhen tight coupling becomes a liability The magic of tRPC stems from the tight integration between your frontend and backend. Your client code directly imports types from your server procedures, creating a seamless developer experience. But this tight coupling can become problematic as your application grows.\nA few months ago, I was working on a project where the frontend and backend were maintained by different teams. What started as a monorepo with tRPC quickly became unwieldy when the teams needed to operate on different release cycles. Every minor change to a procedure signature required coordination between teams, and the frontend was constantly chasing the backend\u0026rsquo;s type definitions.\ntRPC could mitigate this with strategies like versioning procedures or maintaining a stable API layer within the monorepo, but these require additional discipline.\nIn this scenario, a more traditional REST API with OpenAPI specifications would have provided cleaner separation of concerns. 
The frontend team could have consumed a stable contract rather than being tightly bound to the backend implementation.\nThe monolith assumption tRPC works beautifully in the monorepo, monolithic application model. But the moment you need to expose your API to third-party consumers or split your backend into microservices with different technology stacks, things get complicated.\nOn a recent e-commerce platform rebuild, we started with tRPC for internal services. However, when we needed to expose a public API for partners, we faced a dilemma: our beautifully type-safe procedures weren\u0026rsquo;t easily consumable by non-TypeScript clients. We ended up maintaining two parallel API layers, one with tRPC for our own frontend and another REST API for partners. The duplication created maintenance headaches and inconsistencies.\nThe integration complexity tax When you need to integrate with existing systems or third-party services that don\u0026rsquo;t speak tRPC, you often end up with a patchwork of approaches.\nLast quarter, I worked on a project that needed to communicate with a legacy Java backend, a GraphQL service, and a newer Node.js microservice. Using tRPC for just the parts we controlled meant constantly translating between different API paradigms. We ended up abandoning tRPC in favor of a more universal approach with REST and OpenAPI, which allowed for more consistent patterns across all integrations.\nScale considerations While tRPC\u0026rsquo;s performance is generally excellent, there are edge cases where its default behavior isn\u0026rsquo;t optimal. When working on a data-intensive application with hundreds of concurrent users performing complex operations, we discovered that tRPC\u0026rsquo;s default batching and caching mechanisms required significant customization.\nIn comparison, mature REST frameworks had more out-of-the-box solutions for these performance challenges. 
The trade-off between developer experience and performance optimization became apparent when we started hitting scale.\nThe learning curve reality Despite its relatively straightforward API, tRPC still represents a new paradigm for many developers. On teams with varying levels of TypeScript experience, the learning curve can be steeper than anticipated.\nA few months back, I worked on a project with a team that had extensive REST API experience but limited TypeScript exposure. What I thought would be a productivity boost turned into a source of friction, with team members struggling to understand the error messages and type inference nuances. The developer experience benefits of tRPC are significantly diminished when half your team is fighting with the tooling rather than leveraging it.\nPackage ecosystem maturity While the tRPC core is robust, the surrounding ecosystem for common needs like authentication, caching, and monitoring isn\u0026rsquo;t as mature as what you\u0026rsquo;ll find in established REST frameworks like Express or Fastify, or even GraphQL with Apollo.\nThis becomes evident when you need specialized middleware or integrations. Recently, I needed fine-grained request tracing for a performance audit. With Express, I had numerous battle-tested options. With tRPC, I found myself writing custom solutions and adapters.\nWhen consistency trumps type safety If your organization has standardized on a different API style, introducing tRPC may create inconsistency in your API design patterns and documentation.\nI consulted for a company that had invested heavily in GraphQL and built internal tools around its introspection capabilities. 
Adding tRPC to the mix for new services would have fragmented their API approach and tooling, ultimately reducing overall productivity despite the local optimizations tRPC might have provided.\nThe LLM tooling consideration Somewhat ironically, as LLM coding tools have become increasingly competent at generating type definitions and API integrations, some of tRPC\u0026rsquo;s manual labor savings have become less significant. In projects where I\u0026rsquo;ve leveraged LLM tools like Cursor extensively, the productivity gap between hand-coding REST API integrations and using tRPC has narrowed.\nThis isn\u0026rsquo;t to say that generated code is as elegant or maintainable as tRPC\u0026rsquo;s approach. But for teams already deeply invested in AI-assisted development workflows, the relative advantage of tRPC may be diminished. Though one must be careful here, as AI-generated code can easily create its own technical debt problems.\nThinking beyond the hype The intent here isn\u0026rsquo;t to dissuade folks from using tRPC. For many projects, particularly greenfield TypeScript applications built by experienced TypeScript developers in a monorepo setup, tRPC remains an excellent choice.\nHowever, as with any technology decision, context matters. The best engineers I know don\u0026rsquo;t blindly apply the same solution to every problem. They carefully consider the specific requirements, team composition, and ecosystem constraints of each project.\n","permalink":"/posts/2024-09-27-trpc-adverse-cases-alternatives//posts/2024-09-27-trpc-adverse-cases-alternatives/","summary":"\u003cp\u003eFor web eng friends who are building TypeScript full-stack applications, chances are you\u0026rsquo;ve either used tRPC or had it enthusiastically recommended to you. 
And for very good reasons: tRPC has revolutionized how we build type-safe APIs, eliminating the schema definition/code generation dance that plagued earlier approaches.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve been a happy tRPC user since v9, and I\u0026rsquo;m not here to convince you to abandon it. The developer experience benefits are real. End-to-end type safety without code generation, excellent editor autocomplete, and runtime validation that just works. Plus, as the latest StackOverflow survey indicates, \u003ca href=\"https://dev.to/middleware/why-hating-typescript-in-2024-doesnt-make-sense-44e\"\u003eTypeScript developers continue to command higher salaries\u003c/a\u003e than their JavaScript counterparts.\u003c/p\u003e","title":"When tRPC Might Not Be Your Best Bet: Adverse Cases for Choosing Alternatives"},{"content":"The debate between \u0026ldquo;traditional\u0026rdquo; JavaScript frameworks like React and emerging HTML-centric approaches like HTMX has been heating up in recent months. Rather than adding to the noise with abstract comparisons, I decided to build the same interactive component using both technologies to see how they really stack up in practice.\nThe Toy Project: An Interactive Syntax Highlighter For this comparison, I wanted something more interesting than the usual todo app, but still focused enough to make a fair comparison. I settled on creating an interactive code syntax highlighter with the following features:\nA text input area where users can paste code A language selector dropdown Real-time syntax highlighting as you type A \u0026ldquo;copy code\u0026rdquo; button with visual feedback Theme switching between light and dark modes Nothing revolutionary here, but enough to showcase architectural approaches.\nThe React Approach With React, I\u0026rsquo;d naturally reach for some established libraries to handle the syntax highlighting. 
Prism.js or highlight.js would be good candidates, with React wrappers available for both.\nHere\u0026rsquo;s what the main component structure looks like:\nimport React, { useState, useEffect } from \u0026#39;react\u0026#39;; import { Light as SyntaxHighlighter } from \u0026#39;react-syntax-highlighter\u0026#39;; import js from \u0026#39;react-syntax-highlighter/dist/esm/languages/hljs/javascript\u0026#39;; import python from \u0026#39;react-syntax-highlighter/dist/esm/languages/hljs/python\u0026#39;; import { docco, dracula } from \u0026#39;react-syntax-highlighter/dist/esm/styles/hljs\u0026#39;; // Register languages SyntaxHighlighter.registerLanguage(\u0026#39;javascript\u0026#39;, js); SyntaxHighlighter.registerLanguage(\u0026#39;python\u0026#39;, python); const CodeHighlighter = () =\u0026gt; { const [code, setCode] = useState(\u0026#39;// Type your code here\u0026#39;); const [language, setLanguage] = useState(\u0026#39;javascript\u0026#39;); const [theme, setTheme] = useState(\u0026#39;light\u0026#39;); const [copied, setCopied] = useState(false); const copyToClipboard = () =\u0026gt; { navigator.clipboard.writeText(code); setCopied(true); setTimeout(() =\u0026gt; setCopied(false), 2000); }; return ( \u0026lt;div className={`highlighter-container ${theme}`}\u0026gt; \u0026lt;div className=\u0026#34;controls\u0026#34;\u0026gt; \u0026lt;select value={language} onChange={(e) =\u0026gt; setLanguage(e.target.value)} \u0026gt; \u0026lt;option value=\u0026#34;javascript\u0026#34;\u0026gt;JavaScript\u0026lt;/option\u0026gt; \u0026lt;option value=\u0026#34;python\u0026#34;\u0026gt;Python\u0026lt;/option\u0026gt; \u0026lt;/select\u0026gt; \u0026lt;button onClick={() =\u0026gt; setTheme(theme === \u0026#39;light\u0026#39; ? \u0026#39;dark\u0026#39; : \u0026#39;light\u0026#39;)}\u0026gt; Toggle Theme \u0026lt;/button\u0026gt; \u0026lt;button onClick={copyToClipboard}\u0026gt; {copied ? 
\u0026#39;Copied!\u0026#39; : \u0026#39;Copy Code\u0026#39;} \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;textarea value={code} onChange={(e) =\u0026gt; setCode(e.target.value)} className=\u0026#34;code-input\u0026#34; /\u0026gt; \u0026lt;div className=\u0026#34;preview\u0026#34;\u0026gt; \u0026lt;SyntaxHighlighter language={language} style={theme === \u0026#39;light\u0026#39; ? docco : dracula} \u0026gt; {code} \u0026lt;/SyntaxHighlighter\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; ); }; export default CodeHighlighter; This React approach gives us a clean, component-based architecture. State management is handled through React\u0026rsquo;s useState hooks, and the component re-renders whenever state changes. The syntax highlighting library does the heavy lifting for the actual code formatting.\nThe HTMX Approach Now, let\u0026rsquo;s see how we might build the same thing with HTMX. The approach is fundamentally different. Instead of building a client-side application, we\u0026rsquo;ll leverage server-side rendering with targeted DOM updates.\nFirst, our HTML structure:\n\u0026lt;div class=\u0026#34;highlighter-container\u0026#34; data-theme=\u0026#34;light\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;controls\u0026#34;\u0026gt; \u0026lt;select id=\u0026#34;language-selector\u0026#34; hx-post=\u0026#34;/highlight\u0026#34; hx-trigger=\u0026#34;change\u0026#34; hx-target=\u0026#34;#highlighted-output\u0026#34; hx-include=\u0026#34;#code-input\u0026#34;\u0026gt; \u0026lt;option value=\u0026#34;javascript\u0026#34;\u0026gt;JavaScript\u0026lt;/option\u0026gt; \u0026lt;option value=\u0026#34;python\u0026#34;\u0026gt;Python\u0026lt;/option\u0026gt; \u0026lt;/select\u0026gt; \u0026lt;button hx-post=\u0026#34;/toggle-theme\u0026#34; hx-target=\u0026#34;.highlighter-container\u0026#34; hx-swap=\u0026#34;outerHTML\u0026#34;\u0026gt; Toggle Theme \u0026lt;/button\u0026gt; \u0026lt;button hx-post=\u0026#34;/copy\u0026#34; 
hx-include=\u0026#34;#code-input\u0026#34; hx-target=\u0026#34;this\u0026#34; hx-swap=\u0026#34;innerHTML\u0026#34; hx-trigger=\u0026#34;click\u0026#34;\u0026gt; Copy Code \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;textarea id=\u0026#34;code-input\u0026#34; hx-post=\u0026#34;/highlight\u0026#34; hx-trigger=\u0026#34;keyup changed delay:500ms\u0026#34; hx-target=\u0026#34;#highlighted-output\u0026#34;\u0026gt; // Type your code here \u0026lt;/textarea\u0026gt; \u0026lt;div id=\u0026#34;highlighted-output\u0026#34; class=\u0026#34;preview\u0026#34;\u0026gt; \u0026lt;!-- Initial highlighted code will be loaded here --\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; Then, on the server side (using Python with Flask as an example), we\u0026rsquo;d have:\nfrom flask import Flask, request, render_template import pygments from pygments import highlight from pygments.lexers import get_lexer_by_name from pygments.formatters import HtmlFormatter app = Flask(__name__) @app.route(\u0026#39;/highlight\u0026#39;, methods=[\u0026#39;POST\u0026#39;]) def highlight_code(): code = request.form.get(\u0026#39;code-input\u0026#39;, \u0026#39;// Type your code here\u0026#39;) language = request.form.get(\u0026#39;language-selector\u0026#39;, \u0026#39;javascript\u0026#39;) try: lexer = get_lexer_by_name(language) formatter = HtmlFormatter() highlighted = highlight(code, lexer, formatter) return highlighted except Exception as e: return f\u0026#34;\u0026lt;pre\u0026gt;{code}\u0026lt;/pre\u0026gt;\u0026lt;div class=\u0026#39;error\u0026#39;\u0026gt;Error: {str(e)}\u0026lt;/div\u0026gt;\u0026#34; @app.route(\u0026#39;/toggle-theme\u0026#39;, methods=[\u0026#39;POST\u0026#39;]) def toggle_theme(): current_theme = request.form.get(\u0026#39;data-theme\u0026#39;, \u0026#39;light\u0026#39;) new_theme = \u0026#39;dark\u0026#39; if current_theme == \u0026#39;light\u0026#39; else \u0026#39;light\u0026#39; return render_template(\u0026#39;highlighter_container.html\u0026#39;, 
theme=new_theme, code=request.form.get(\u0026#39;code-input\u0026#39;, \u0026#39;\u0026#39;), language=request.form.get(\u0026#39;language-selector\u0026#39;, \u0026#39;javascript\u0026#39;)) @app.route(\u0026#39;/copy\u0026#39;, methods=[\u0026#39;POST\u0026#39;]) def copy_text(): # In reality, copying happens client-side via JS # This endpoint just returns the confirmation message return \u0026#34;Copied!\u0026#34; With a bit of extra JavaScript to handle the actual clipboard operation:\ndocument.addEventListener(\u0026#39;DOMContentLoaded\u0026#39;, function() { htmx.on(\u0026#39;htmx:afterSwap\u0026#39;, function(event) { if (event.detail.target.id === \u0026#39;highlighted-output\u0026#39;) { // Apply any client-side formatting or interactions here } }); // Handle actual clipboard functionality document.querySelector(\u0026#39;button[hx-post=\u0026#34;/copy\u0026#34;]\u0026#39;).addEventListener(\u0026#39;click\u0026#39;, function() { const codeText = document.querySelector(\u0026#39;#code-input\u0026#39;).value; navigator.clipboard.writeText(codeText); // HTMX will handle changing button text via the response from /copy setTimeout(function() { this.innerHTML = \u0026#39;Copy Code\u0026#39;; }.bind(this), 2000); }); }); The Differences in Approach State Management: The React version manages all state in the component, while the HTMX version delegates state management to the server with targeted requests.\nRendering Logic: React handles rendering on the client, while HTMX leverages server-side rendering with partially updated DOM elements.\nCode Organization: React centralizes UI logic in components, while HTMX distributes it between HTML attributes and server endpoints.\nPerformance Considerations Running this in production reveals some interesting performance characteristics:\nInitial Load: The HTMX version loads faster initially because it doesn\u0026rsquo;t need to download and parse a large JavaScript bundle. 
The React version, depending on how it\u0026rsquo;s bundled, might include 100KB+ of framework code before our application even starts.\nRuntime Performance: For this specific example, both perform similarly once loaded. The React version feels snappier for theme switching since it\u0026rsquo;s purely client-side, while the HTMX version needs a server roundtrip.\nNetwork Traffic: The HTMX version generates more HTTP requests but transfers smaller payloads (just the highlighted HTML), while the React version makes fewer requests but processes more on the client.\nMemory Usage: The React version typically uses more browser memory due to the JavaScript runtime overhead.\nDeveloper Experience Trade-offs React Land Working with both technologies on the same project revealed nuanced differences in developer experience that go beyond simple feature comparisons. In the React implementation, I found myself thinking in components from the start. The mental model feels natural once you\u0026rsquo;ve spent time with it - everything is a component with clearly defined props and state.\nThe debugging experience with React DevTools transformed my workflow. Being able to inspect component hierarchies, monitor state changes, and track renders made troubleshooting straightforward. When I needed to refactor my syntax highlighter to add the theme toggle feature, the component isolation made it remarkably easy to ensure I wasn\u0026rsquo;t breaking existing functionality.\nHTMX World With HTMX, the experience took a different shape. The learning curve felt gentler at first - after all, I was just writing HTML with some special attributes. This approach forced me to think more carefully about my API design upfront, considering what endpoints I needed and how they would respond to different requests.\nThe lack of boilerplate in HTMX was refreshing. 
Implementing the \u0026ldquo;copy code\u0026rdquo; button with a visual confirmation took just a few lines of markup rather than setting up state, effects, and event handlers. When building the language selector, I didn\u0026rsquo;t need to worry about managing a separate piece of state - the server simply responded with newly highlighted code when the selection changed.\nOne unexpected challenge with HTMX was debugging. Without a dedicated DevTools extension, I found myself relying more on browser network inspection to understand what was happening with my requests and responses.\nLLM Recognition As noted in the HTMX essay about Gumroad\u0026rsquo;s technology choices, AI tooling support is currently stronger for React. When I got stuck, LLM tools were noticeably more helpful with the React implementation than with HTMX-specific syntax and patterns.\nWhen to Choose Which? After building the same component both ways, I\u0026rsquo;ve developed a clearer sense of when each technology shines. React excels in scenarios where you\u0026rsquo;re crafting complex, stateful applications that require sophisticated client-side interactions. If your application needs to maintain numerous interconnected states, perform complex data manipulations in the browser, or provide rich interactive experiences without frequent server communication, React provides a robust framework to manage this complexity. This is especially true if your team already has React expertise.\nHTMX, on the other hand, proved particularly valuable when working with applications where server-side rendering already plays a significant role. The syntax highlighter demonstrated how HTMX elegantly bridges the gap between server-generated HTML and interactive user experiences. 
If your backend already does the heavy lifting of data processing and presentation, HTMX provides the interactivity layer without requiring a complete client-side application architecture.\nThe decision ultimately comes down to the specific demands of your project, your team\u0026rsquo;s expertise, and your architectural preferences. For teams tired of JavaScript framework complexity and looking for a simpler mental model, HTMX offers a refreshing alternative that still delivers modern interactivity. For applications that require complex state management and rich client-side interactions, React\u0026rsquo;s mature ecosystem continues to provide tremendous value.\nNeither approach is universally \u0026ldquo;better\u0026rdquo;. They\u0026rsquo;re optimized for different contexts.\nI\u0026rsquo;ve increasingly found myself reaching for HTMX for projects where I would have automatically used React a year ago. It\u0026rsquo;s about having another powerful tool in the toolbox that lets me build web applications with less complexity when appropriate.\n","permalink":"/posts/2024-08-02-code-syntax-highlighter-htmx-react/","summary":"\u003cp\u003eThe debate between \u0026ldquo;traditional\u0026rdquo; JavaScript frameworks like React and emerging HTML-centric approaches like HTMX has been heating up in recent months. Rather than adding to the noise with abstract comparisons, I decided to build the same interactive component using both technologies to see how they really stack up in practice.\u003c/p\u003e\n\u003ch2 id=\"the-toy-project-an-interactive-syntax-highlighter\"\u003eThe Toy Project: An Interactive Syntax Highlighter\u003c/h2\u003e\n\u003cp\u003eFor this comparison, I wanted something more interesting than the usual todo app, but still focused enough to keep the comparison fair. 
I settled on creating an interactive code syntax highlighter with the following features:\u003c/p\u003e","title":"Building an Interactive Code Syntax Highlighter with HTMX vs React"},{"content":"Introduction When building real-time web applications, WebSockets often grab the spotlight. They promise robust, bidirectional communication and are frequently the default choice discussed in architecture meetings and tech talks. Meanwhile, Server-Sent Events (SSE) sits quietly in the corner, a powerful but frequently overlooked standard that deserves more serious consideration than it typically receives, especially for specific, common use cases.\nI recently worked on an analytics project where the initial requirement was \u0026ldquo;real-time updates, so we need WebSockets.\u0026rdquo; It\u0026rsquo;s a common refrain. However, after digging into the actual need\u0026hellip; streaming analytics updates to the browser with absolutely no requirement for client-to-server messages over that channel, we pivoted to SSE. The outcome? A noticeably simpler backend, less resource consumption, and a more straightforward codebase to maintain.\nThis experience was a great reminder that we should regularly challenge our default choices. Let\u0026rsquo;s dive into why SSE deserves a more prominent place in your technical toolkit and explore the scenarios where it genuinely outshines its more famous sibling.\nWhat Exactly Are Server-Sent Events? Server-Sent Events (SSE) is a standard technology enabling servers to push updates to web clients over a single, long-lived HTTP connection. Unlike the request-response model or even long-polling, the server initiates the data transmission whenever new information is ready. 
It’s natively supported in browsers via the EventSource JavaScript API and relies on the text/event-stream MIME type on the server side to deliver a stream of UTF-8 encoded text data.\nHere’s the gist of the fundamental idea in action:\nClient-Side (JavaScript):\nconst eventSource = new EventSource(\u0026#39;/stream-updates\u0026#39;); // Connect to the SSE endpoint eventSource.onmessage = (event) =\u0026gt; { // Standard event handler for unnamed messages console.log(\u0026#39;New data:\u0026#39;, event.data); try { const jsonData = JSON.parse(event.data); updateDashboard(jsonData); // Process the received data } catch (e) { console.error(\u0026#39;Failed to parse SSE data:\u0026#39;, e); } }; // You can also listen for named events eventSource.addEventListener(\u0026#39;user_update\u0026#39;, (event) =\u0026gt; { console.log(\u0026#39;User specific update:\u0026#39;, event.data); }); eventSource.onerror = (error) =\u0026gt; { console.error(\u0026#39;EventSource failed:\u0026#39;, error); // The browser attempts auto-reconnect by default, but add custom logic if needed if (eventSource.readyState === EventSource.CLOSED) { console.log(\u0026#39;Connection was closed definitively.\u0026#39;); // Perhaps implement exponential backoff here for robustness } // Note: The browser handles reconnection automatically unless the server sends a non-200 response or the wrong content-type. 
}; // Remember to close the connection when it\u0026#39;s no longer needed // eventSource.close(); Server-Side (Node.js with Express):\nimport express from \u0026#39;express\u0026#39;; const app = express(); const PORT = 3000; app.get(\u0026#39;/stream-updates\u0026#39;, (req, res) =\u0026gt; { // Set essential headers for SSE res.setHeader(\u0026#39;Content-Type\u0026#39;, \u0026#39;text/event-stream\u0026#39;); res.setHeader(\u0026#39;Cache-Control\u0026#39;, \u0026#39;no-cache\u0026#39;); res.setHeader(\u0026#39;Connection\u0026#39;, \u0026#39;keep-alive\u0026#39;); res.flushHeaders(); // Flush headers immediately // Send an initial connection confirmation (optional) res.write(\u0026#39;data: {\u0026#34;message\u0026#34;: \u0026#34;Connected to SSE stream!\u0026#34;}\\n\\n\u0026#39;); // Simulate sending data every second const intervalId = setInterval(() =\u0026gt; { const data = { timestamp: new Date().toISOString(), value: Math.random() }; // Format the message according to SSE spec: `data: \u0026lt;json_string\u0026gt;\\n\\n` res.write(`data: ${JSON.stringify(data)}\\n\\n`); }, 1000); // Clean up when the client disconnects req.on(\u0026#39;close\u0026#39;, () =\u0026gt; { console.log(\u0026#39;Client disconnected, clearing interval.\u0026#39;); clearInterval(intervalId); res.end(); // Ensure the response stream is properly closed }); }); app.listen(PORT, () =\u0026gt; { console.log(`SSE server running at http://localhost:${PORT}`); }); The relative simplicity here is a defining trait.\nWhy SSE Often Gets Overshadowed WebSockets dominate the real-time web dev talking points primarily due to their full-duplex communication. 
The ability for both client and server to send messages independently over the same connection is powerful and essential for truly interactive applications like chat rooms, collaborative editing tools, or multiplayer games.\nThis bidirectional capability often leads developers to overlook SSE because:\nWebSockets\u0026rsquo; Versatility: Its two-way nature makes it seem like a \u0026ldquo;one size fits all\u0026rdquo; real-time solution, even when bidirectional flow isn\u0026rsquo;t strictly needed. Perceived Limitations: SSE\u0026rsquo;s unidirectional nature (server-to-client only) and its limitation to UTF-8 text data can appear restrictive at first glance compared to WebSockets\u0026rsquo; support for binary data. Mature Ecosystem: WebSocket libraries like Socket.IO are well-established, offering features like automatic fallback mechanisms (to long-polling) and handling nuances of connection management, which can seem more robust initially. Lack of Awareness: Some developers simply haven\u0026rsquo;t encountered scenarios where SSE\u0026rsquo;s specific strengths make it the more appropriate choice, or they aren\u0026rsquo;t familiar with its built-in conveniences. The Quiet Technical Advantages of SSE SSE\u0026rsquo;s design choices offer compelling benefits in the right context:\nSimplicity and HTTP Compatibility: SSE runs over standard HTTP/S. This means it generally works out-of-the-box with existing infrastructure (e.g. load balancers, proxies, firewalls) without the special configuration sometimes needed for the ws:// or wss:// protocols. Implementation is often quicker, leveraging the native EventSource API without mandatory external libraries. Automatic Reconnection: This is a killer feature. The EventSource API handles connection drops automatically. If the connection breaks, the browser will attempt to reconnect periodically. 
It even sends the Last-Event-ID HTTP header (if the server sends event IDs), allowing the server to potentially resend missed messages. While you might want custom backoff logic for production, the baseline reliability is built-in, unlike raw WebSockets where you must implement this yourself. Lower Overhead (for One-Way): For server-to-client push, SSE typically involves less protocol overhead than establishing and maintaining a WebSocket connection, especially when considering the complexity handled by libraries like Socket.IO to ensure reliability. Server Simplicity: Managing unidirectional streams can be less complex on the server side compared to handling bidirectional WebSocket connections, potentially leading to lower resource usage for equivalent numbers of connected clients receiving updates. Understanding the Limitations No technology is perfect, and SSE has constraints:\nUnidirectional: As stressed before, data flows only from server to client. If you need client-to-server communication over the same channel, SSE is not the right tool. Text-Only: SSE natively supports only UTF-8 encoded text. Binary data requires workarounds like Base64 encoding (inefficient) or sending a notification via SSE to trigger a separate fetch/XHR request for the binary asset. Browser Connection Limits (HTTP/1.1): Older HTTP/1.1 setups often face a browser limit of around 6 concurrent HTTP connections per domain. Opening many SSE connections could exhaust this pool. Thankfully, HTTP/2 largely mitigates this, typically allowing 100+ concurrent streams over a single TCP connection. Ensure your infrastructure supports HTTP/2 if multiple streams are needed. No Native IE Support: While all modern browsers (Chrome, Firefox, Safari, Edge) have excellent support, Internet Explorer never implemented EventSource. Polyfills exist but add complexity if IE support is a hard requirement (which is increasingly rare). 
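The resumable-reconnection behavior described above hinges on the server attaching id: fields to its events. As a minimal sketch of the wire format (the formatSseMessage helper below is hypothetical, not part of any library), each field goes on its own line and a blank line terminates the event:

```javascript
// Hypothetical helper: serialize an event into the SSE wire format.
// Including `id:` makes the browser send a Last-Event-ID header on reconnect,
// so the server can replay anything the client missed.
function formatSseMessage({ id, event, data }) {
  let message = '';
  if (id !== undefined) message += `id: ${id}\n`;
  if (event) message += `event: ${event}\n`;
  // Multi-line payloads need one `data:` field per line
  for (const line of JSON.stringify(data).split('\n')) {
    message += `data: ${line}\n`;
  }
  return message + '\n'; // blank line marks the end of the event
}

console.log(formatSseMessage({ id: 42, event: 'sale', data: { amount: 9.99 } }));
// → "id: 42\nevent: sale\ndata: {\"amount\":9.99}\n\n"
```

On the server you would pass the returned string straight to res.write(); on reconnect, reading req.headers['last-event-id'] tells you where the client left off.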
Industry Use Cases Where SSE Shines SSE is far from niche; it powers real-time features in many large-scale applications:\nLive Dashboards and Analytics: This is a classic sweet spot. Think monitoring systems, e-commerce sales trackers, or IoT sensor displays. Data flows one way to update visualizations. Shopify famously detailed their use of SSE for the high-traffic Black Friday Cyber Monday (BFCM) Live Map, citing its simplicity and HTTP compatibility as key advantages over polling or even WebSockets for that specific read-heavy use case.\n// Server-side (Conceptual - pushing sales data) eventEmitter.on(\u0026#39;new_sale\u0026#39;, (saleData) =\u0026gt; { clients.forEach(clientRes =\u0026gt; { clientRes.write(`event: sale\\ndata: ${JSON.stringify(saleData)}\\n\\n`); }); }); // Client-side eventSource.addEventListener(\u0026#39;sale\u0026#39;, (event) =\u0026gt; { const sale = JSON.parse(event.data); updateSalesChart(sale); }); Real-Time Notifications: Pushing alerts for new messages, social media interactions (likes, comments), system status updates, or build pipeline progress. The user receives information passively.\n// Server-side (Conceptual - pushing notification) function pushNotification(userId, notification) { const clientRes = findClientResponseForUser(userId); if (clientRes) { clientRes.write(`event: notification\\ndata: ${JSON.stringify(notification)}\\n\\n`); } } // Client-side eventSource.addEventListener(\u0026#39;notification\u0026#39;, (event) =\u0026gt; { const notification = JSON.parse(event.data); displayNotificationToast(notification.message); }); News Feeds and Stock Tickers: Streaming breaking news headlines or live market data. Users consume information as it becomes available. The low latency and minimal overhead are beneficial here.\nAI Response Streaming: Platforms like ChatGPT often use SSE to stream text responses/delta incrementally. 
This provides the \u0026ldquo;typing\u0026rdquo; effect, improving perceived responsiveness as the model generates output, rather than waiting for the entire response.\n// Server-side (Conceptual - streaming text chunks) async function streamAiResponse(clientRes, prompt) { const stream = await getAIResponseStream(prompt); // Your AI model call for await (const chunk of stream) { clientRes.write(`data: ${JSON.stringify({ textChunk: chunk })}\\n\\n`); } clientRes.write(\u0026#39;event: done\\ndata: {}\\n\\n\u0026#39;); // Signal completion } // Client-side let fullResponse = \u0026#39;\u0026#39;; eventSource.onmessage = (event) =\u0026gt; { const data = JSON.parse(event.data); fullResponse += data.textChunk; updateChatArea(fullResponse); }; eventSource.addEventListener(\u0026#39;done\u0026#39;, () =\u0026gt; { eventSource.close(); console.log(\u0026#39;AI response complete.\u0026#39;); }); Live Activity Feeds: Showing updates in collaborative environments (e.g., \u0026ldquo;User X just commented\u0026rdquo;) or social feeds where the primary flow is consuming new information. GitHub uses SSE for some live updates in their UI.\nSSE in Modern Architectures: Serverless and Edge The rise of serverless functions (like AWS Lambda, Google Cloud Functions) and edge computing (like Cloudflare Workers, Vercel Edge Functions) adds another dimension to the SSE vs. WebSockets debate. Maintaining long-lived, stateful WebSocket connections can be complex or costly in these environments, often requiring dedicated infrastructure or services.\nSSE, being built on HTTP requests/responses (albeit long-lived ones), often maps more naturally to the execution models of these platforms. Many platforms now offer streaming response APIs that make implementing SSE straightforward. 
Here\u0026rsquo;s a conceptual example for Cloudflare Workers:\n// Example Cloudflare Worker for SSE export default { async fetch(request) { if (new URL(request.url).pathname !== \u0026#39;/events\u0026#39;) { return new Response(\u0026#39;Not Found\u0026#39;, { status: 404 }); } let intervalId; const stream = new ReadableStream({ start(controller) { controller.enqueue(\u0026#39;data: {\u0026#34;message\u0026#34;: \u0026#34;Edge stream started\u0026#34;}\\n\\n\u0026#39;); // Simulate pushing data from the edge intervalId = setInterval(() =\u0026gt; { const data = { edgeTime: new Date().toISOString() }; controller.enqueue(`data: ${JSON.stringify(data)}\\n\\n`); }, 2000); }, cancel() { console.log(\u0026#39;Edge stream cancelled.\u0026#39;); clearInterval(intervalId); } }); // Return the stream in the Response return new Response(stream, { headers: { \u0026#39;Content-Type\u0026#39;: \u0026#39;text/event-stream\u0026#39;, \u0026#39;Cache-Control\u0026#39;: \u0026#39;no-cache\u0026#39;, \u0026#39;Connection\u0026#39;: \u0026#39;keep-alive\u0026#39;, }, }); } }; This can be significantly simpler to deploy and manage on compatible edge platforms compared to stateful WebSocket solutions.\nEnhancing SSE with Type Safety A common developer desire is type safety across the stack. While basic SSE is typeless text, tools are emerging to bridge this gap. 
Libraries like ts-sse allow you to define your event contracts in TypeScript, ensuring consistency between server emission and client consumption.\n// Shared types (e.g., in a shared package) interface PriceUpdate { symbol: string; price: number; } // Server-side (using ts-sse types) import { type ServerSentEvent, createSSE } from \u0026#39;ts-sse\u0026#39;; const sse = createSSE\u0026lt;ServerSentEvent\u0026lt;{ priceUpdate: PriceUpdate }\u0026gt;\u0026gt;(); app.get(\u0026#39;/typed-events\u0026#39;, (req, res) =\u0026gt; { sse.init(req, res); // Sets up headers and connection // Example: Sending a typed event const update: PriceUpdate = { symbol: \u0026#39;ACME\u0026#39;, price: 123.45 }; sse.send(\u0026#39;priceUpdate\u0026#39;, update); }); // Client-side (using ts-sse/client) import { EventSourceClient } from \u0026#39;ts-sse/client\u0026#39;; const client = new EventSourceClient\u0026lt;{ priceUpdate: PriceUpdate }\u0026gt;(\u0026#39;/typed-events\u0026#39;); client.on(\u0026#39;priceUpdate\u0026#39;, (eventData) =\u0026gt; { // eventData is correctly typed as PriceUpdate console.log(`Symbol: ${eventData.symbol}, Price: ${eventData.price}`); updateStockTickerUI(eventData); }); This significantly improves the developer experience for more complex SSE implementations.\nBest Practices for Robust SSE To make your SSE implementation production-ready:\nSet Headers Correctly: Always include Content-Type: text/event-stream, Cache-Control: no-cache, and Connection: keep-alive. Use Event IDs: Send an id: \u0026lt;your_unique_id\u0026gt;\\n field with messages. This sets the Last-Event-ID which the browser sends on reconnect, allowing you to resume the stream intelligently. Implement Keep-Alives: Send periodic comments (lines starting with a colon, e.g., : keep-alive\\n\\n) every 15-30 seconds. This prevents intermediaries (proxies, load balancers) from timing out the seemingly idle connection. 
Graceful Shutdown: Ensure the server closes the connection correctly when done, or responds with a non-200 status / non-event-stream content type if an error occurs, signaling the client not to reconnect automatically. Client-Side Error Handling: Use the onerror handler to detect issues. Implement custom reconnection logic (like exponential backoff with jitter) if the default browser behavior isn\u0026rsquo;t sufficient for your reliability needs. Monitor Connection Limits: Be mindful of browser connection limits, especially if not using HTTP/2, and consider multiplexing different event types over a single SSE connection if needed. I might cover implementation examples for making SSE more robust in another blog post; stay tuned! (Update 20250130: it\u0026rsquo;s here! Advanced Patterns for Production-Ready SSE)\nConclusion: Choose the Right Tool WebSockets are indispensable for bidirectional, low-latency interaction. But for the surprisingly common scenario of unidirectional server-to-client data streaming, Server-Sent Events offer a compelling blend of simplicity, efficiency, and robustness, especially leveraging native browser features and standard HTTP infrastructure. By understanding the strengths and limitations of SSE, we can avoid the trap of defaulting to the most feature-rich option when a simpler, more focused tool is a better fit.\n","permalink":"/posts/2024-06-17-server-sent-events-real-time-apps/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eWhen building real-time web applications, WebSockets often grab the spotlight. They promise robust, bidirectional communication and are frequently the default choice discussed in architecture meetings and tech talks. 
Meanwhile, \u003cstrong\u003eServer-Sent Events (SSE)\u003c/strong\u003e sits quietly in the corner, a powerful but frequently overlooked standard that deserves more serious consideration than it typically receives, especially for specific, common use cases.\u003c/p\u003e\n\u003cp\u003eI recently worked on an analytics project where the initial requirement was \u0026ldquo;real-time updates, so we need WebSockets.\u0026rdquo; It\u0026rsquo;s a common refrain. However, after digging into the actual need\u0026hellip; streaming analytics updates \u003cem\u003eto\u003c/em\u003e the browser with absolutely no requirement for client-to-server messages over that channel, we pivoted to SSE. The outcome? A noticeably simpler backend, less resource consumption, and a more straightforward codebase to maintain.\u003c/p\u003e","title":"The Quiet Power of Server-Sent Events for Real-Time Apps"},{"content":"With Bun reaching 1.0 last year and the recent release of Bun 1.1, JavaScript developers have a lot to be excited about. Windows support is a big headline feature, but there are several less-heralded improvements that make daily development significantly smoother. Today, we\u0026rsquo;ll explore two standout features: the built-in SQLite database and the Bun Shell API.\nTo showcase these features in action, we\u0026rsquo;ll build a simple coffee consumption tracker. It\u0026rsquo;s a toy example that demonstrates how these tools simplify everyday development tasks.\nThe Project: A Coffee Consumption Tracker This utility will track what coffee you drink throughout the day, recording both the type and amount. 
It will calculate your caffeine intake and provide some basic statistics.\nHere\u0026rsquo;s the complete code that we\u0026rsquo;ll break down and explain:\nimport { Database } from \u0026#34;bun:sqlite\u0026#34;; import { $ } from \u0026#34;bun\u0026#34;; // Initialize our database const db = new Database(\u0026#34;coffee-tracker.sqlite\u0026#34;); // Create our tables if they don\u0026#39;t exist db.run(` CREATE TABLE IF NOT EXISTS coffee_types ( id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE, caffeine_mg INTEGER ); CREATE TABLE IF NOT EXISTS coffee_log ( id INTEGER PRIMARY KEY AUTOINCREMENT, coffee_id INTEGER, cups INTEGER, timestamp TEXT, FOREIGN KEY(coffee_id) REFERENCES coffee_types(id) ); `); // Seed some default coffee types if none exist const coffeeCount = db.query(\u0026#34;SELECT COUNT(*) as count FROM coffee_types\u0026#34;).get().count; if (coffeeCount === 0) { db.run(` INSERT INTO coffee_types (name, caffeine_mg) VALUES (\u0026#39;Espresso\u0026#39;, 63), (\u0026#39;Drip Coffee\u0026#39;, 95), (\u0026#39;Cold Brew\u0026#39;, 125), (\u0026#39;Americano\u0026#39;, 77), (\u0026#39;Cappuccino\u0026#39;, 63); `); console.log(\u0026#34;🌱 Seeded default coffee types!\u0026#34;); } // Helper to clear the terminal async function clearScreen() { if (process.platform === \u0026#34;win32\u0026#34;) { await $`cls`.quiet(); } else { await $`clear`.quiet(); } } // Display a fancy ASCII art title function displayTitle() { console.log(` ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ COFFEE TRACKER - POWERED BY BUN SQLITE ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ ☕️ `); } // List available coffee types function listCoffeeTypes() { const coffeeTypes = db.query(\u0026#34;SELECT id, name, caffeine_mg FROM coffee_types\u0026#34;).all(); console.log(\u0026#34;\\nAvailable coffee types:\u0026#34;); coffeeTypes.forEach((coffee, index) =\u0026gt; { console.log(` ${index + 1}. 
${coffee.name} (${coffee.caffeine_mg}mg caffeine)`); }); return coffeeTypes; } // Log a coffee function logCoffee(coffeeId, cups) { const timestamp = new Date().toISOString(); db.run(\u0026#34;INSERT INTO coffee_log (coffee_id, cups, timestamp) VALUES (?, ?, ?)\u0026#34;, [coffeeId, cups, timestamp]); console.log(`\\n✅ Logged ${cups} cup(s)!`); } // Show today\u0026#39;s stats function showTodayStats() { const today = new Date().toISOString().split(\u0026#39;T\u0026#39;)[0]; const stats = db.query(` SELECT c.name, SUM(l.cups) as total_cups, SUM(l.cups * c.caffeine_mg) as total_caffeine FROM coffee_log l JOIN coffee_types c ON l.coffee_id = c.id WHERE l.timestamp LIKE \u0026#39;${today}%\u0026#39; GROUP BY c.name `).all(); console.log(\u0026#34;\\n📊 Today\u0026#39;s Coffee Stats:\u0026#34;); if (stats.length === 0) { console.log(\u0026#34; No coffee logged today yet.\u0026#34;); return; } let totalCups = 0; let totalCaffeine = 0; stats.forEach((stat) =\u0026gt; { console.log(` ${stat.name}: ${stat.total_cups} cup(s) - ${stat.total_caffeine}mg caffeine`); totalCups += stat.total_cups; totalCaffeine += stat.total_caffeine; }); console.log(`\\n Total: ${totalCups} cup(s) - ${totalCaffeine}mg caffeine`); // Add a fun caffeine warning if (totalCaffeine \u0026gt; 400) { console.log(\u0026#34;\\n⚠️ Caffeine warning: You\u0026#39;ve exceeded the recommended daily limit of 400mg!\u0026#34;); } else { const remaining = 400 - totalCaffeine; console.log(`\\n✅ You can safely drink ${Math.floor(remaining / 63)} more espresso shots today.`); } } // Export database to a backup file async function exportDatabase() { const timestamp = new Date().toISOString().replace(/:/g, \u0026#39;-\u0026#39;); const backupName = `coffee-backup-${timestamp}.sqlite`; // Use Bun\u0026#39;s file APIs for a fast copy const dbContent = await Bun.file(\u0026#34;coffee-tracker.sqlite\u0026#34;).arrayBuffer(); await Bun.write(backupName, dbContent); console.log(`\\n✅ Database exported to ${backupName}`); } 
// Import from a backup file async function importDatabase() { // List available backups const stdout = await $`ls -1 coffee-backup-*.sqlite 2\u0026gt;/dev/null || echo \u0026#34;No backups found\u0026#34;`.text(); const backups = stdout.trim().split(\u0026#39;\\n\u0026#39;); if (backups[0] === \u0026#34;No backups found\u0026#34;) { console.log(\u0026#34;\\n❌ No backup files found.\u0026#34;); return; } console.log(\u0026#34;\\nAvailable backups:\u0026#34;); backups.forEach((backup, i) =\u0026gt; { console.log(` ${i+1}. ${backup}`); }); const choice = parseInt(prompt(\u0026#34;\\nSelect backup to import (0 to cancel): \u0026#34;) || \u0026#34;0\u0026#34;); if (choice === 0 || choice \u0026gt; backups.length) { console.log(\u0026#34;Import cancelled.\u0026#34;); return; } const selectedBackup = backups[choice-1]; // Close current database db.close(); // Create a backup of current database const currentDbContent = await Bun.file(\u0026#34;coffee-tracker.sqlite\u0026#34;).arrayBuffer(); await Bun.write(\u0026#34;coffee-tracker-before-import.sqlite\u0026#34;, currentDbContent); // Import selected backup const backupContent = await Bun.file(selectedBackup).arrayBuffer(); await Bun.write(\u0026#34;coffee-tracker.sqlite\u0026#34;, backupContent); console.log(`\\n✅ Imported database from ${selectedBackup}`); console.log(\u0026#34; (Your previous database was backed up to coffee-tracker-before-import.sqlite)\u0026#34;); // Restart the app console.log(\u0026#34;\\n🔄 Restarting app to apply changes...\u0026#34;); await new Promise(r =\u0026gt; setTimeout(r, 2000)); // In a real app, you\u0026#39;d restart more elegantly process.exit(0); } // Main menu async function main() { while (true) { await clearScreen(); displayTitle(); console.log(\u0026#34;\\nWhat would you like to do?\u0026#34;); console.log(\u0026#34;1. Log a coffee\u0026#34;); console.log(\u0026#34;2. View today\u0026#39;s stats\u0026#34;); console.log(\u0026#34;3. 
Export database\u0026#34;); console.log(\u0026#34;4. Import database\u0026#34;); console.log(\u0026#34;5. Exit\u0026#34;); const choice = prompt(\u0026#34;Enter your choice (1-5): \u0026#34;); if (choice === \u0026#34;1\u0026#34;) { const coffeeTypes = listCoffeeTypes(); const coffeeChoice = parseInt(prompt(`\\nSelect coffee type (1-${coffeeTypes.length}): `) || \u0026#34;0\u0026#34;); if (coffeeChoice \u0026gt; 0 \u0026amp;\u0026amp; coffeeChoice \u0026lt;= coffeeTypes.length) { const cups = parseInt(prompt(\u0026#34;How many cups? \u0026#34;) || \u0026#34;1\u0026#34;); logCoffee(coffeeTypes[coffeeChoice - 1].id, cups); await new Promise(r =\u0026gt; setTimeout(r, 1500)); } } else if (choice === \u0026#34;2\u0026#34;) { showTodayStats(); await new Promise(r =\u0026gt; setTimeout(r, 5000)); } else if (choice === \u0026#34;3\u0026#34;) { await exportDatabase(); await new Promise(r =\u0026gt; setTimeout(r, 2000)); } else if (choice === \u0026#34;4\u0026#34;) { await importDatabase(); } else if (choice === \u0026#34;5\u0026#34;) { console.log(\u0026#34;\\nThanks for using Coffee Tracker! ☕️\u0026#34;); break; } } } // Run the app await main(); Let\u0026rsquo;s dive into the main features showcased by this code.\nFeature 1: Built-in SQLite Integration One of the most practical features in Bun is its built-in SQLite support, eliminating the need for external database drivers or complex setups.\nSetting Up the Database import { Database } from \u0026#34;bun:sqlite\u0026#34;; // Initialize our database const db = new Database(\u0026#34;coffee-tracker.sqlite\u0026#34;); Notice how clean this is - no need to install any packages! 
The database integration is part of the Bun runtime, accessible through the bun:sqlite import.\nCreating Tables with Multi-Statement Queries Bun 1.1 enhanced its SQLite support by allowing multiple SQL statements in a single call:\ndb.run(` CREATE TABLE IF NOT EXISTS coffee_types ( id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE, caffeine_mg INTEGER ); CREATE TABLE IF NOT EXISTS coffee_log ( id INTEGER PRIMARY KEY AUTOINCREMENT, coffee_id INTEGER, cups INTEGER, timestamp TEXT, FOREIGN KEY(coffee_id) REFERENCES coffee_types(id) ); `); Previously, you\u0026rsquo;d need to execute each statement separately, but Bun 1.1 lets you send an entire schema at once. This makes database initialization more concise and readable.\nQuerying Data The querying API is intuitive and familiar to SQL developers:\n// Simple query with .get() to fetch a single row const coffeeCount = db.query(\u0026#34;SELECT COUNT(*) as count FROM coffee_types\u0026#34;).get().count; // Using .all() to fetch multiple rows const coffeeTypes = db.query(\u0026#34;SELECT id, name, caffeine_mg FROM coffee_types\u0026#34;).all(); The .get() method returns a single row, while .all() returns an array of rows. Bun automatically converts the result to JavaScript objects with properly named properties.\nParameterized Queries For data insertion, we use parameterized queries to prevent SQL injection:\ndb.run(\u0026#34;INSERT INTO coffee_log (coffee_id, cups, timestamp) VALUES (?, ?, ?)\u0026#34;, [coffeeId, cups, timestamp]); The placeholder ? 
syntax will be familiar to anyone who\u0026rsquo;s used SQLite before, making the transition to Bun\u0026rsquo;s implementation seamless.\nFeature 2: Bun Shell API The other standout feature is Bun\u0026rsquo;s shell API, which provides a clean way to interact with the system shell from JavaScript.\nCross-Platform Shell Commands The $ template tag lets you execute shell commands directly:\nimport { $ } from \u0026#34;bun\u0026#34;; // Helper to clear the terminal async function clearScreen() { if (process.platform === \u0026#34;win32\u0026#34;) { await $`cls`.quiet(); } else { await $`clear`.quiet(); } } This concise syntax replaces clunky child_process.spawn() calls. The .quiet() method suppresses output to the console, but you can still access the result.\nCapturing Command Output Looking at our import function, we see how to capture and parse command output:\nconst stdout = await $`ls -1 coffee-backup-*.sqlite 2\u0026gt;/dev/null || echo \u0026#34;No backups found\u0026#34;`.text(); const backups = stdout.trim().split(\u0026#39;\\n\u0026#39;); The .text() method returns the command\u0026rsquo;s stdout as a string. You can also use:\n.json() for parsing JSON output .arrayBuffer() for binary data .blob() for web-compatible binary data Error Handling In Bun 1.1, the shell API will automatically throw an error if a command exits with a non-zero status code. 
This makes error handling intuitive:\ntry { await $`some-command-that-might-fail`; } catch (error) { console.error(\u0026#34;Command failed:\u0026#34;, error.message); } If you don\u0026rsquo;t want this behavior, you can use .throws(false) to prevent exceptions:\nconst { exitCode } = await $`risky-command`.throws(false); if (exitCode !== 0) { // Handle error case } Putting It All Together Our coffee tracker demonstrates these features working together to create a seamless developer experience:\nWe create and seed a SQLite database without any external dependencies We use the shell API to handle cross-platform terminal operations We implement database backup and restore using both Bun\u0026rsquo;s file API and shell commands This might look like a trivial example, but it showcases a significant improvement in the JavaScript toolchain. Before Bun, you would need:\nA SQLite driver package (like better-sqlite3) Additional packages for shell commands (like execa or shelljs) Careful configuration for cross-platform compatibility Taking It Further: Database Import Feature Bun 1.1 also introduced the ability to import SQLite databases directly as modules. 
Let\u0026rsquo;s extend our coffee tracker with a separate reporting script:\n// coffee-report.js import db from \u0026#34;./coffee-tracker.sqlite\u0026#34; with { type: \u0026#34;sqlite\u0026#34; }; function generateCoffeeReport() { // Get lifetime stats const stats = db.query(` SELECT COUNT(*) as total_entries, SUM(l.cups) as total_cups, ROUND(SUM(l.cups * c.caffeine_mg)) as total_caffeine FROM coffee_log l JOIN coffee_types c ON l.coffee_id = c.id `).get(); console.log(\u0026#34;☕️ COFFEE CONSUMPTION REPORT ☕️\u0026#34;); console.log(\u0026#34;===============================\u0026#34;); console.log(`Total entries: ${stats.total_entries}`); console.log(`Total cups consumed: ${stats.total_cups}`); console.log(`Total caffeine consumed: ${stats.total_caffeine}mg`); // Get consumption by coffee type const byType = db.query(` SELECT c.name, SUM(l.cups) as cups, ROUND(SUM(l.cups * c.caffeine_mg)) as caffeine FROM coffee_log l JOIN coffee_types c ON l.coffee_id = c.id GROUP BY c.name ORDER BY cups DESC `).all(); console.log(\u0026#34;\\nCONSUMPTION BY TYPE\u0026#34;); console.log(\u0026#34;===================\u0026#34;); byType.forEach(type =\u0026gt; { console.log(`${type.name}: ${type.cups} cups (${type.caffeine}mg caffeine)`); }); } generateCoffeeReport(); With this simple script, you can generate reports from your coffee database without duplicating any database setup code.\nThe Best Part: Single-File Executable Finally, Bun 1.1 enhanced its ability to compile SQLite databases into executables. For example, we could create a single distributable file:\n// coffee-tracker-app.js import db from \u0026#34;./coffee-tracker.sqlite\u0026#34; with { type: \u0026#34;sqlite\u0026#34;, embed: \u0026#34;true\u0026#34; }; // ... rest of app code ... 
Then compile it:\nbun build --compile coffee-tracker-app.js This creates a single executable file with both the application code and the database embedded inside, perfect for distribution to users who don\u0026rsquo;t have Bun installed.\nConclusion Bun continues to evolve as an attractive alternative to Node.js, especially for developers who value simplicity and performance. These features in version 1.1 strengthen the case for including Bun in your JavaScript toolkit.\nTry the coffee tracker for yourself, and you might find these features becoming a staple in your development workflow!\n","permalink":"/posts/2024-05-04-coffee-tracker-bun-updates/","summary":"\u003cp\u003eWith Bun reaching 1.0 last year and the recent release of Bun 1.1, JavaScript developers have a lot to be excited about. Windows support is a big headline feature, but there are several less-heralded improvements that make daily development significantly smoother. Today, we\u0026rsquo;ll explore two standout features: the built-in SQLite database and the Bun Shell API.\u003c/p\u003e\n\u003cp\u003eTo showcase these features in action, we\u0026rsquo;ll build a simple coffee consumption tracker. It\u0026rsquo;s a toy example that demonstrates how these tools simplify everyday development tasks.\u003c/p\u003e","title":"Coffee Tracker: Exploring Bun's SQLite Integration, Shell API, and v1.1 updates"},{"content":"Introduction When building full-stack applications with Next.js, one of the architectural decisions we\u0026rsquo;ve been grappling with lately is choosing between Server Actions and API Routes for server-side logic. With Server Actions becoming stable in Next.js 14, I\u0026rsquo;ve been experimenting with both approaches across different projects. I thought I\u0026rsquo;d share some observations from my recent experiences. 
Your mileage may vary, and I\u0026rsquo;d love to hear how others are approaching this decision!\nUnderstanding the Contenders Before diving into comparisons, here\u0026rsquo;s my current understanding of the two approaches:\nServer Actions are functions that run on the server but can be called directly from client components. They were introduced in Next.js 13.4 and became stable in Next.js 14, offering a way to perform server-side logic without explicitly creating API endpoints.\nAPI Routes are the more traditional approach, where you define explicit endpoints in your /api directory (or using the App Router\u0026rsquo;s route handlers) that can be called using regular HTTP requests.\nThe Magic Behind Server Actions When you mark a function with the 'use server' directive, either at the top of an async function or at the top of a file, Next.js does something interesting with it. During the build process, it identifies these server actions and includes only lightweight \u0026ldquo;proxy\u0026rdquo; functions in your JavaScript bundle, not the implementation; the proxies know how to call back to the server.\nHere\u0026rsquo;s what actually happens when a client component calls a server action, essentially an RPC variant:\nYour client code calls what looks like a normal function Next.js intercepts this call and serializes all the arguments you passed Under the hood, a POST request is fired off to a special Next.js endpoint, carrying your serialized data and metadata that identifies which Server Action to run The server receives this request, identifies the correct Server Action, deserializes your arguments, and executes the actual server-side code After execution, the server serializes the return value and sends it back to the client The client code receives and uses the result as if it had just called a regular function If you inspect your network tab when a Server Action runs, you\u0026rsquo;ll see these special POST requests happening.\nSome Thoughts on Server Actions 
1. Code Organization Feels Different Here\u0026rsquo;s a simple example comparing both approaches:\n// Traditional API Route approach // app/api/submit-form/route.js export async function POST(request) { const data = await request.json(); // validate, process data return Response.json({ success: true }); } // app/form/page.js \u0026#39;use client\u0026#39;; import { useState } from \u0026#39;react\u0026#39;; export default function FormPage() { const [loading, setLoading] = useState(false); async function handleSubmit(e) { e.preventDefault(); setLoading(true); const formData = new FormData(e.target); try { const response = await fetch(\u0026#39;/api/submit-form\u0026#39;, { method: \u0026#39;POST\u0026#39;, body: JSON.stringify(Object.fromEntries(formData)), headers: { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, }, }); // handle response } catch (error) { // handle error } finally { setLoading(false); } } return ( \u0026lt;form onSubmit={handleSubmit}\u0026gt; {/* form fields */} \u0026lt;/form\u0026gt; ); } And here\u0026rsquo;s a similar implementation using Server Actions:\n// app/form/page.js \u0026#39;use client\u0026#39;; import { useState } from \u0026#39;react\u0026#39;; import { submitForm } from \u0026#39;./actions\u0026#39;; export default function FormPage() { const [loading, setLoading] = useState(false); async function handleSubmit(formData) { setLoading(true); try { await submitForm(formData); // handle success } catch (error) { // handle error } finally { setLoading(false); } } return ( \u0026lt;form action={handleSubmit}\u0026gt; {/* form fields */} \u0026lt;/form\u0026gt; ); } // app/form/actions.js \u0026#39;use server\u0026#39;; export async function submitForm(formData) { // validate, process data return { success: true }; } For me, the Server Actions approach feels more straightforward for simple forms, though I\u0026rsquo;m still getting used to the pattern.\n2. 
Type Safety Seems Smoother When using TypeScript, I\u0026rsquo;ve found that Server Actions provide a nice developer experience with type safety. The types defined in your server action are available to your client components without extra work, which has helped me catch a few mistakes early.\n3. Integration with Experimental React Form Hooks One interesting aspect of Server Actions is how they work with React\u0026rsquo;s experimental form hooks like useFormStatus and useFormState. These hooks are still in canary releases as of now, but they show promise for simpler form handling.\nAs Dan Abramov noted in a tweet from March 2023, these hooks are helpful for basic forms, but \u0026ldquo;for very dynamic forms you end up grabbing for more very quickly\u0026rdquo;. I\u0026rsquo;ve found this to be true in my limited experimentation so far.\nCache Invalidation Is Simple One huge benefit I\u0026rsquo;ve discovered with Server Actions is their integration with Next.js\u0026rsquo;s caching system. When you mutate data through a Server Action, you can immediately revalidate the associated cache using built-in APIs like revalidatePath and revalidateTag.\nFor example:\n\u0026#39;use server\u0026#39;; import { revalidatePath } from \u0026#39;next/cache\u0026#39;; export async function updatePost(id, data) { await db.posts.update({ where: { id }, data }); // Revalidate the post page and any related pages revalidatePath(`/posts/${id}`); revalidatePath(\u0026#39;/posts\u0026#39;); return { success: true }; } This tight integration makes it much easier to keep your UI in sync with your data compared to the manual cache management often needed with API Routes.\nWhere API Routes Still Make Sense To Me 1. External Access Requirements For instance, I\u0026rsquo;ve had projects that started simple but later needed to add a mobile app. Having proper API Routes from the beginning made this addition much easier.\n2. 
Team Collaboration Considerations On projects where I\u0026rsquo;m collaborating with others, I\u0026rsquo;ve noticed that API Routes sometimes make it easier to divide responsibilities. The separation between frontend and backend code becomes more explicit, which can help avoid stepping on each other\u0026rsquo;s toes.\n3. Familiar Patterns I have to admit that part of my continued use of API Routes comes down to familiarity. The concepts of status codes, HTTP methods, and headers are patterns I\u0026rsquo;ve used for years, and they still feel like the \u0026ldquo;right way\u0026rdquo; for certain problems.\nHow I Decide These Days When Server Actions Make Sense in the Workflow Last month, I was working on a personal dashboard application with several forms for managing user settings and data entry. Rather than bouncing between API route files and component files, I found myself naturally gravitating toward Server Actions.\nThe workflow felt cohesive - I\u0026rsquo;d write a React component with a form, then add the Server Action in the same directory to handle the form submission. Since all these forms were only accessed through the Next.js app itself (and not by external services), Server Actions felt like the perfect fit.\nOne specific example was a comment system for a blog. I implemented a Server Action that could create, update, and delete comments while automatically triggering cache invalidation for the affected pages. The simplicity was striking - the entire feature took half the boilerplate it would have normally, with no need to manually build API endpoints, handle CSRF protection, or write fetch logic with error states.\nWhen API Routes Better Suit My Projects Contrast this with another project I worked on recently - a CMS that needed to feed data to both a web application and a separate mobile app. 
Here, API Routes were the obvious choice from day one.\nWe needed consistent endpoints that both platforms could consume, with careful versioning and detailed documentation. I set up a structured API with proper resource-based routes, query parameter support, and consistent error responses. These endpoints became the contract between our platforms, and having them explicitly defined as API Routes made the integration much clearer for everyone involved.\nAnother situation where I still prefer API Routes is when working on projects with more distinct frontend and backend teams. On a recent client project, we had dedicated backend developers focused on database optimization and business logic, while frontend specialists handled the React components and user interactions. By maintaining clear API boundaries, each team could work at their own pace with minimal friction. The backend team could refactor their implementation details without worrying about breaking the frontend, as long as the API contract remained stable.\nAdditionally, when dealing with complex authentication schemes or third-party API integrations, I find that API Routes give me the flexibility I need. On a recent project involving OAuth flows and webhook handlers, the ability to have fine-grained control over headers, status codes, and middleware was invaluable.\nPerformance Thoughts From what I can tell, Server Actions might have a slight performance edge in theory, but in my real-world applications, the difference hasn\u0026rsquo;t been significant. The database queries or external API calls typically dominate the response time either way.\nSecurity Considerations Both approaches seem secure when implemented properly, though they have different security models:\nServer Actions have some nice built-in protections against CSRF attacks, which is one less thing to worry about. 
This is because Next.js handles the security underneath by generating and validating tokens automatically.\nAPI Routes give me more explicit control over authentication and authorization, which I sometimes prefer for sensitive operations.\nWhere I\u0026rsquo;ve Landed For Now I\u0026rsquo;m finding that Server Actions shine for simpler, UI-driven operations, while API Routes still feel right for more complex scenarios or when I need broader accessibility.\nThis is definitely still evolving for my projects, and I expect the approach will continue to change as these technologies mature and as I work on more projects with different requirements.\n","permalink":"/posts/2024-01-19-server-actions-api-routes-journey/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eWhen building full-stack applications with Next.js, one of the architectural decisions we\u0026rsquo;ve been grappling with lately is choosing between Server Actions and API Routes for server-side logic. With Server Actions becoming stable in Next.js 14, I\u0026rsquo;ve been experimenting with both approaches across different projects. I thought I\u0026rsquo;d share some observations from my recent experiences. Your mileage may vary, and I\u0026rsquo;d love to hear how others are approaching this decision!\u003c/p\u003e","title":"Server Actions vs API Routes in Next.js: My Journey with Both Approaches"},{"content":"The Evolving Nature of Web Vitals Web performance professionals have long been familiar with Google\u0026rsquo;s Core Web Vitals. These metrics have fundamentally changed how we approach performance optimization, shifting focus from technical measurements to metrics that better reflect real user experiences. As real-world usage patterns evolve and browser capabilities advance, these metrics need to evolve too.\nThat\u0026rsquo;s the role Interaction to Next Paint (INP) plays. 
Announced earlier this year, INP is Google\u0026rsquo;s experimental metric designed to better measure a page\u0026rsquo;s overall responsiveness to user interactions. More importantly, it\u0026rsquo;s positioned to replace First Input Delay (FID) as a Core Web Vital next year, so understanding it now will give you a significant head start.\nHaving recently migrated several client projects to prioritize INP optimization, I\u0026rsquo;ve seen firsthand how this shift requires rethinking some of our established performance patterns.\nCurrent Core Web Vitals Let\u0026rsquo;s briefly review the current Core Web Vitals trio:\nLargest Contentful Paint (LCP) measures loading performance by timing when the largest content element becomes visible. Good experiences should have an LCP of 2.5 seconds or less.\nFirst Input Delay (FID) measures interactivity by timing the delay between a user\u0026rsquo;s first interaction and the browser\u0026rsquo;s response. Good experiences should have an FID of 100ms or less.\nCumulative Layout Shift (CLS) measures visual stability by quantifying unexpected layout shifts. Good experiences should have a CLS score of 0.1 or less.\nThese metrics have served us well, but FID in particular has shown limitations that the new INP metric aims to address.\nWhy FID Isn\u0026rsquo;t Good Enough While working on a complex e-commerce project last month, our team discovered a frustrating issue: despite excellent FID scores, customers were still complaining about laggy interactions when using the product filtering system.\nThis highlights FID\u0026rsquo;s primary limitation: it only measures the delay before processing begins on the first interaction. It doesn\u0026rsquo;t account for:\nThe processing time of the interaction itself How long it takes for visual feedback to appear Any interactions beyond the first one In real-world usage, users care about the complete responsiveness of every interaction, not just the delay before the first one. 
This is precisely why Google has been developing INP.\nUnderstanding Interaction to Next Paint (INP) INP measures the latency of all interactions throughout a page\u0026rsquo;s lifecycle and reports a single value that represents the worst interaction experienced by the user. Unlike FID, INP encompasses the full duration from when a user interacts with the page until the next frame is painted on the screen.\nAn interaction typically consists of three parts:\nInput delay: Time from the interaction to when event handlers begin processing Processing time: Time it takes to run all event handlers Presentation delay: Time from completing event handlers to the next frame being rendered INP sums these components to provide a more comprehensive picture of responsiveness. The current thresholds suggest that a good INP score should be 200ms or less, while anything above 500ms is considered poor.\nHow INP Is Measured in the Wild Currently, INP data is being gathered through the Chrome User Experience Report and is available in tools like PageSpeed Insights. For real-time measurement during development, you can use:\nChrome DevTools\u0026rsquo; Performance panel The web-vitals JavaScript library The Chrome User Experience Report PageSpeed Insights API To start monitoring INP with the web-vitals library, here\u0026rsquo;s the bare-bones setup:\nimport {onINP} from \u0026#39;web-vitals\u0026#39;; // Measure and log INP onINP(({value}) =\u0026gt; { console.log(`INP: ${value}`); }); Chrome\u0026rsquo;s DevTools now also shows INP in the Performance panel, allowing you to identify exactly which interactions are causing issues and what\u0026rsquo;s happening during those interactions.\nOptimizing for INP: Practical Strategies After several weeks of focusing on INP optimization across different projects, we\u0026rsquo;ve identified some key strategies that consistently help improve scores:\n1. Break Up Long Tasks Long-running JavaScript tasks are the most common culprits behind poor INP scores. 
Break up tasks over 50ms by using techniques like:\n// Instead of this function processLargeDataSet(items) { items.forEach(item =\u0026gt; expensiveOperation(item)); } // Do this function processLargeDataSet(items) { let i = 0; function processChunk() { const end = Math.min(i + 10, items.length); for (; i \u0026lt; end; i++) { expensiveOperation(items[i]); } if (i \u0026lt; items.length) { setTimeout(processChunk, 0); } } processChunk(); } 2. Optimize Event Handlers Keep your event handlers lean. Move complex logic off the main thread when possible:\n// Instead of this button.addEventListener(\u0026#39;click\u0026#39;, () =\u0026gt; { // Complex data processing const result = complexDataProcessing(); updateUI(result); }); // Do this button.addEventListener(\u0026#39;click\u0026#39;, () =\u0026gt; { // Visual feedback immediately showLoadingIndicator(); // Move heavy processing to a Web Worker or use requestIdleCallback setTimeout(() =\u0026gt; { const result = complexDataProcessing(); updateUI(result); hideLoadingIndicator(); }, 0); }); 3. Prioritize Rendering Updates Use requestAnimationFrame to coordinate visual updates:\nbutton.addEventListener(\u0026#39;click\u0026#39;, () =\u0026gt; { // Do calculations const newValues = calculateNewValues(); // Schedule rendering at the optimal time requestAnimationFrame(() =\u0026gt; { updateDOMWithNewValues(newValues); }); }); 4. Implement Interaction Debouncing For interactions like scrolling or typing that can fire frequently, implement debouncing:\nfunction debounce(func, wait) { let timeout; return function(...args) { clearTimeout(timeout); timeout = setTimeout(() =\u0026gt; func.apply(this, args), wait); }; } const debouncedHandler = debounce(() =\u0026gt; { // Handle interaction performExpensiveOperation(); }, 150); element.addEventListener(\u0026#39;input\u0026#39;, debouncedHandler); 5. 
Use CSS Instead of JavaScript When Possible Offload animations and transitions to CSS, which is handled by the browser\u0026rsquo;s compositor thread:\n.button { transition: transform 0.2s ease; } .button:active { transform: scale(0.95); } Real-world Results: A Case Study Recently, I worked on optimizing a content-heavy site that had good LCP and CLS scores but struggled with interactions. The site\u0026rsquo;s product filtering system used complex JavaScript that ran mostly on the main thread.\nWe implemented the following changes:\nMoved data filtering logic to a Web Worker Added immediate visual feedback when filters were clicked Implemented progressive loading of results Cached filter results for common combinations The results were dramatic:\nINP improved from 652ms to 189ms User engagement with filters increased by 27% Cart abandonment decreased by 15% These improvements weren\u0026rsquo;t captured by FID at all, which remained relatively unchanged (and already good). This highlights why INP is such an important evolution in measuring real user experience.\nPreparing for INP\u0026rsquo;s Future as a Core Web Vital According to Google\u0026rsquo;s announcements, INP is expected to replace FID as a Core Web Vital sometime in 2024. This means it will become one of the metrics used in Google\u0026rsquo;s page experience ranking signals.\nTo prepare your sites:\nStart monitoring INP now using available tools Identify your worst-performing interactions Implement the optimization strategies outlined above Set up continuous monitoring to track improvements and regressions The good news is that optimizing for INP generally improves the overall user experience, regardless of its status as a metric. 
Users appreciate responsive interfaces, so these optimizations directly benefit your users while also preparing you for the upcoming metric change.\n","permalink":"/posts/2023-10-11-interaction-to-next-paint/","summary":"\u003ch2 id=\"the-evolving-nature-of-web-vitals\"\u003eThe Evolving Nature of Web Vitals\u003c/h2\u003e\n\u003cp\u003eWeb performance professionals have long been familiar with Google\u0026rsquo;s Core Web Vitals. These metrics have fundamentally changed how we approach performance optimization, shifting focus from technical measurements to metrics that better reflect real user experiences. As real-world usage patterns evolve and browser capabilities advance, these metrics need to evolve too.\u003c/p\u003e\n\u003cp\u003eThat\u0026rsquo;s the role Interaction to Next Paint (INP) plays. Announced earlier this year, INP is Google\u0026rsquo;s experimental metric designed to better measure a page\u0026rsquo;s overall responsiveness to user interactions. More importantly, it\u0026rsquo;s positioned to replace First Input Delay (FID) as a Core Web Vital next year, so understanding it now will give you a significant head start.\u003c/p\u003e","title":"Embracing Interaction to Next Paint (INP) for Better Web Responsiveness"},{"content":"Saw an interesting question pop up in a Discord server the other day that reminded me of a classic JavaScript head-scratcher: what really happens when you use flow control statements like return or throw inside a finally block? Most of us use try...catch...finally regularly, especially finally for crucial cleanup tasks: closing file handles, releasing network connections, resetting state, you name it. Its guarantee to run, whether the try block succeeds, fails (throw), or exits early (return), is fundamental.\nBut things get weird when finally itself tries to dictate the flow.\nThe Purpose of finally The primary job of the finally block is cleanup. 
Consider this typical scenario:\nfunction processData() { let resource = acquireResource(); // Might throw try { // Do work with the resource let result = resource.doSomethingCritical(); // Might throw if (result.needsEarlyExit) { console.log(\u0026#34;Exiting early based on result.\u0026#34;); // Cleanup still needs to happen! return { status: \u0026#39;partial\u0026#39;, data: result.partialData }; } console.log(\u0026#34;Processing completed normally.\u0026#34;); return { status: \u0026#39;ok\u0026#39;, data: result.fullData }; // Normal exit } catch (error) { console.error(\u0026#34;An error occurred during processing:\u0026#34;, error); // Maybe re-throw, maybe return an error status // Cleanup still needs to happen! throw new Error(\u0026#34;Failed to process data\u0026#34;, { cause: error }); } finally { console.log(\u0026#34;Cleaning up the resource.\u0026#34;); resource.release(); // \u0026lt;-- The essential cleanup step } } // Example usage try { const outcome = processData(); console.log(\u0026#34;Outcome:\u0026#34;, outcome); } catch (e) { console.error(\u0026#34;Caught top-level error:\u0026#34;, e.message); } No matter how the try block finishes: normal completion, return, or throw (caught by catch or propagating outwards), the finally block executes resource.release(). Predictable, reliable. That\u0026rsquo;s what we want.\nThe Completion Record So, why does adding flow control inside finally cause trouble? The answer lies in an internal mechanism defined by the ECMAScript specification: the Completion Record. You don\u0026rsquo;t interact with it directly in code, but understanding the concept clarifies a lot of JavaScript\u0026rsquo;s execution behavior.\nThink of a Completion Record as a small, internal \u0026ldquo;status report\u0026rdquo; generated whenever a block of code finishes executing. It essentially contains:\nType: How did it finish? 
(normal, return, throw, break, continue) Value: If it was a return or throw, what value was returned or thrown? (undefined for normal completions). Target: (Relevant for break/continue in loops/labels, less critical here). When a try block executes, it produces a Completion Record. If an error occurs and there\u0026rsquo;s a catch, the catch block executes and it produces a Completion Record.\nNow, here comes the finally block. It always runs after try (and catch, if triggered). The crucial part is this:\nfinally executes. It produces its own Completion Record. If the finally block\u0026rsquo;s Completion Record type is normal, then the original Completion Record from the try (or catch) block is the one that determines what happens next (e.g., the function returns the value from try, or the error from catch continues propagating). However, if the finally block\u0026rsquo;s Completion Record type is not normal (i.e., it contains a return, throw, break, or continue), then this new Completion Record from finally overrides the original one. The Gotchas in Practice Let\u0026rsquo;s see what that overriding behavior looks like.\nGotcha 1: return inside finally swallows everything.\nfunction testReturnInFinally() { try { console.log(\u0026#34;Try block starts.\u0026#34;); //return \u0026#34;Value from try\u0026#34;; // This return gets ignored! throw new Error(\u0026#34;Error from try\u0026#34;); // This throw gets ignored too! } catch (e) { console.error(\u0026#34;Caught error:\u0026#34;, e.message); return \u0026#34;Value from catch\u0026#34;; // Even this return gets ignored! } finally { console.log(\u0026#34;Finally block starts.\u0026#34;); return \u0026#34;Value from finally\u0026#34;; // \u0026lt;-- This takes precedence! } // This line is unreachable console.log(\u0026#34;After try...catch...finally\u0026#34;); } console.log(\u0026#34;Result:\u0026#34;, testReturnInFinally()); // Output: // Try block starts. // Caught error: Error from try // Finally block starts. 
// Result: Value from finally Notice how the return in finally completely replaced the intended throw from try (which was caught) and even the return from catch. The function simply returned \u0026quot;Value from finally\u0026quot;. If the try block had completed normally with a return, that too would have been overridden. This can silently mask errors or lead to completely unexpected return values.\nGotcha 2: throw inside finally masks original errors/returns.\nfunction testThrowInFinally() { try { console.log(\u0026#34;Try block starts.\u0026#34;); return \u0026#34;Value from try\u0026#34;; // This return gets ignored! // throw new Error(\u0026#34;Original error from try\u0026#34;); // This original error also gets masked } finally { console.log(\u0026#34;Finally block starts.\u0026#34;); throw new Error(\u0026#34;Error from finally\u0026#34;); // \u0026lt;-- This error takes precedence! } } try { testThrowInFinally(); } catch (e) { console.error(\u0026#34;Caught exception:\u0026#34;, e.message); } // Output: // Try block starts. // Finally block starts. // Caught exception: Error from finally Here, the throw in finally discards the return value from the try block. If the try block had thrown its own error, that original error would be lost, replaced by the one thrown from finally. 
This makes debugging a nightmare: the error you see isn\u0026rsquo;t the root cause.\nGotcha 3: break or continue inside finally (within a loop).\nThis is less common, but behaves similarly:\nfor (let i = 0; i \u0026lt; 3; i++) { try { console.log(`Try block, i = ${i}`); if (i === 1) { throw new Error(\u0026#34;Problem at i=1\u0026#34;); } } finally { console.log(`Finally block, i = ${i}`); if (i === 1) { console.log(\u0026#34;Breaking from finally!\u0026#34;); break; // \u0026lt;-- Abrupt completion from finally } } console.log(`After finally, i = ${i}`); // This won\u0026#39;t run for i = 1 } // Output: // Try block, i = 0 // Finally block, i = 0 // After finally, i = 0 // Try block, i = 1 // Finally block, i = 1 // Breaking from finally! The break in finally overrides the throw that would have otherwise occurred (or the normal completion if no error happened) and terminates the loop immediately.\nWhy Avoid Flow Control in finally? The key takeaway is predictability. Code that relies on return, throw, break, or continue within a finally block is often:\nHard to Read: It obscures the actual exit point and outcome of the try...catch structure. Error-Prone: It can silently swallow errors or intended return values. Difficult to Debug: The observed behavior (return value or thrown error) might not originate from where you expect. Stick to using finally for its intended purpose: cleanup. If you need to alter control flow based on success or failure, do it within the try or catch blocks before finally runs.\nA Glimpse of the Future: Explicit Resource Management (using TC39 Stage 3) This discussion about reliable cleanup ties nicely into a feature that\u0026rsquo;s been maturing in TC39: Explicit Resource Management, primarily through the using and await using declarations. 
It\u0026rsquo;s currently a Stage 3 proposal, meaning it\u0026rsquo;s stable and implementations are appearing in runtimes (like Node.js, Deno, Bun) and browsers, often behind flags, or available via transpilers like Babel and TypeScript.\nThe core idea is to provide a dedicated syntax for resources that need deterministic cleanup. Objects can implement a [Symbol.dispose] (synchronous) or [Symbol.asyncDispose] (asynchronous) method.\n// Hypothetical resource class class MyResource { constructor() { console.log(\u0026#34;Resource acquired\u0026#34;); this.closed = false; } doWork() { if (this.closed) throw new Error(\u0026#34;Resource is closed\u0026#34;); console.log(\u0026#34;Doing work...\u0026#34;); // Simulating potential failure if (Math.random() \u0026gt; 0.5) throw new Error(\u0026#34;Work failed!\u0026#34;); } // Dispose method for \u0026#39;using\u0026#39; [Symbol.dispose]() { console.log(\u0026#34;Resource disposing (sync)\u0026#34;); this.closed = true; } } // Usage with \u0026#39;using\u0026#39; function processWithUsing() { try { // \u0026#39;resource\u0026#39; will be disposed automatically when leaving the block using resource = new MyResource(); resource.doWork(); console.log(\u0026#34;Work succeeded!\u0026#34;); return \u0026#34;Success\u0026#34;; } catch (e) { console.error(\u0026#34;Caught error in processWithUsing:\u0026#34;, e.message); return \u0026#34;Failure\u0026#34;; // Resource still gets disposed! } // No \u0026#39;finally\u0026#39; needed for resource disposal! } console.log(\u0026#34;Final result:\u0026#34;, processWithUsing()); // Possible Output 1 (Success): // Resource acquired // Doing work... // Work succeeded! // Resource disposing (sync) // Final result: Success // Possible Output 2 (Failure): // Resource acquired // Doing work... // Resource disposing (sync) 
// Caught error in processWithUsing: Work failed! // Final result: Failure The using declaration automatically calls [Symbol.dispose]() when the block scope is exited, regardless of whether it\u0026rsquo;s via normal completion, return, or throw. Note the ordering in the failure case: disposal runs as the try block is exited, before the catch block executes. await using does the same for [Symbol.asyncDispose]() in async contexts.\nWhile finally remains essential for general-purpose cleanup actions that aren\u0026rsquo;t tied to a specific object\u0026rsquo;s lifecycle, using offers a more structured and less error-prone way to handle resource management specifically. It directly addresses the common use case where finally is currently employed, often making the code cleaner and avoiding the temptation to put complex logic (or worse, flow control) inside finally. Definitely something to keep an eye on and start experimenting with where available.\n","permalink":"/posts/2023-09-08-javascript-finally-using//posts/2023-09-08-javascript-finally-using/","summary":"\u003cp\u003eSaw an interesting question pop up in a Discord server the other day that reminded me of a classic JavaScript head-scratcher: what \u003cem\u003ereally\u003c/em\u003e happens when you use flow control statements like \u003ccode\u003ereturn\u003c/code\u003e or \u003ccode\u003ethrow\u003c/code\u003e inside a \u003ccode\u003efinally\u003c/code\u003e block? Most of us use \u003ccode\u003etry...catch...finally\u003c/code\u003e regularly, especially \u003ccode\u003efinally\u003c/code\u003e for crucial cleanup tasks: closing file handles, releasing network connections, resetting state, you name it. Its guarantee to run, whether the \u003ccode\u003etry\u003c/code\u003e block succeeds, fails (\u003ccode\u003ethrow\u003c/code\u003e), or exits early (\u003ccode\u003ereturn\u003c/code\u003e), is fundamental.\u003c/p\u003e","title":"Digging into JavaScript's `finally`: Completion Records, Flow Control Pitfalls, and the Road to `using`"},{"content":"What are import maps? 
ES modules with direct URLs have been around for a while, allowing us to write code like:\nimport { something } from \u0026#39;https://cdn.example.com/packages/module/v1.2.3/index.js\u0026#39;; Import maps provide a level of indirection between module specifiers and the actual URL where the module resides. They\u0026rsquo;re a simple JSON structure that maps from import specifiers to URLs:\n\u0026lt;script type=\u0026#34;importmap\u0026#34;\u0026gt; { \u0026#34;imports\u0026#34;: { \u0026#34;react\u0026#34;: \u0026#34;https://esm.sh/react@18.2.0\u0026#34;, \u0026#34;react-dom\u0026#34;: \u0026#34;https://esm.sh/react-dom@18.2.0\u0026#34;, \u0026#34;lib/\u0026#34;: \u0026#34;https://cdn.skypack.dev/lib@1.2.3/\u0026#34; } } \u0026lt;/script\u0026gt; With this map in place, our application code can now use clean, maintainable imports:\nimport React from \u0026#39;react\u0026#39;; import ReactDOM from \u0026#39;react-dom\u0026#39;; import { helper } from \u0026#39;lib/helper.js\u0026#39;; The browser uses the import map to resolve these imports to their actual URLs at runtime. No bundler required!\nWhy this matters now Import maps aren\u0026rsquo;t new; they\u0026rsquo;ve been in the works for years. What\u0026rsquo;s changed is that they\u0026rsquo;re finally supported across all major browsers. Chrome added support back in version 89, Firefox implemented them in version 108, and Safari finally joined the party in version 16.4, released earlier this year.\nThis cross-browser support means we can start using import maps in production without fallbacks or polyfills.\nThe current state of JavaScript module management The reality is, despite the browser support for ES modules, the vast majority of web projects today still rely on bundlers like webpack, Rollup, Vite, esbuild, or Parcel. 
This is for good reasons:\nPerformance optimization through bundling and minification Transpilation for wider browser support Tree-shaking to eliminate unused code Support for non-JavaScript assets like CSS, images, and more Development conveniences like hot module replacement Most developers aren\u0026rsquo;t writing direct ES module imports in production code. Instead, we use package managers like npm or yarn, and let bundlers handle the module resolution and optimization.\nSo where do import maps fit in this ecosystem? They\u0026rsquo;re not a replacement for bundlers in most large-scale applications, but they do open up interesting possibilities for specific use cases.\nThe strengths of import maps 1. Versioning and dependency management With import maps, updating a dependency becomes a one-line change:\n- \u0026#34;react\u0026#34;: \u0026#34;https://esm.sh/react@18.1.0\u0026#34; + \u0026#34;react\u0026#34;: \u0026#34;https://esm.sh/react@18.2.0\u0026#34; Every module that imports \u0026lsquo;react\u0026rsquo; will now use the new version. No need to search and replace across your codebase.\n2. Progressive enhancement without bundling Import maps enable sophisticated caching strategies without bundling. This is particularly valuable for progressive enhancement approaches where you want core functionality to work without JavaScript, but enhance the experience when it\u0026rsquo;s available.\nFor example, you can serve the minimal JavaScript needed for core functionality, then use import maps to lazily load enhanced features only when they\u0026rsquo;re needed.\nThis might warrant a brief post experimenting with the pattern if it becomes more prevalent.\n3. 
Development/production switching // Development importmap { \u0026#34;imports\u0026#34;: { \u0026#34;app/\u0026#34;: \u0026#34;/src/\u0026#34;, \u0026#34;react\u0026#34;: \u0026#34;https://esm.sh/react@18.2.0?dev\u0026#34; } } // Production importmap { \u0026#34;imports\u0026#34;: { \u0026#34;app/\u0026#34;: \u0026#34;/dist/\u0026#34;, \u0026#34;react\u0026#34;: \u0026#34;https://esm.sh/react@18.2.0\u0026#34; } } 4. Reducing the \u0026ldquo;hidden magic\u0026rdquo; in projects Modern JavaScript development often involves a lot of \u0026ldquo;magic\u0026rdquo; happening under the hood with imports. Tools like webpack, Vite, and others transform imports in ways that are invisible in the source code.\nImport maps make the mapping explicit, and you can see exactly what URL a module resolves to by checking the import map.\nUse cases Here are some patterns where import maps offer unique advantages:\nMicro-frontends without the complexity Import maps are ideal for micro-frontend architectures. Each team can develop their components independently, and the shell application can use an import map to stitch everything together.\n\u0026lt;script type=\u0026#34;importmap\u0026#34;\u0026gt; { \u0026#34;imports\u0026#34;: { \u0026#34;shell/\u0026#34;: \u0026#34;/shell/\u0026#34;, \u0026#34;team-a/\u0026#34;: \u0026#34;https://team-a-app.example.com/modules/\u0026#34;, \u0026#34;team-b/\u0026#34;: \u0026#34;https://team-b-app.example.com/modules/\u0026#34; } } \u0026lt;/script\u0026gt; Dependency sharing in multi-page applications For multi-page applications, import maps ensure consistent dependency versions across pages without bundling everything together:\n\u0026lt;!-- shared-deps.html (included in every page) --\u0026gt; \u0026lt;script type=\u0026#34;importmap\u0026#34;\u0026gt; { \u0026#34;imports\u0026#34;: { \u0026#34;react\u0026#34;: \u0026#34;https://esm.sh/react@18.2.0\u0026#34;, \u0026#34;react-dom\u0026#34;: \u0026#34;https://esm.sh/react-dom@18.2.0\u0026#34;, 
\u0026#34;utils/\u0026#34;: \u0026#34;/shared/utils/\u0026#34; } } \u0026lt;/script\u0026gt; This pattern also gives you the benefits of code-splitting without the complexity of a bundler\u0026rsquo;s configuration.\nVersioned deployments When deploying new versions of an application, you can include a version hash in your module paths:\n\u0026lt;script type=\u0026#34;importmap\u0026#34;\u0026gt; { \u0026#34;imports\u0026#34;: { \u0026#34;app/\u0026#34;: \u0026#34;/dist/a7f3bc9/\u0026#34; } } \u0026lt;/script\u0026gt; This ensures clean cache invalidation when deploying changes.\nThe limitations and trade-offs Import maps aren\u0026rsquo;t a silver bullet, and there are valid concerns:\nDependency management complexity: Import maps shift some dependency management responsibility from your build tools to your application code, which can introduce its own complexities.\nPerformance considerations: Unbundled ESM imports mean more HTTP requests, which can impact performance despite HTTP/2 and HTTP/3 improvements. For large applications, bundling still offers performance advantages.\nLimited to browser environments: Import maps are a browser feature, meaning different module resolution strategies between browser and Node.js environments.\nNo tree-shaking: Automatic tree-shaking and dead code elimination remain the bundlers\u0026rsquo; domain.\nConclusion Import maps represent a significant step toward a more mature module ecosystem in the browser. They bridge the gap between Node.js-style bare imports and URL-based ES modules, providing flexibility without requiring bundlers.\nWhile they won\u0026rsquo;t replace bundlers for most production applications, particularly large ones where performance is critical, they do offer new possibilities for development workflows, simpler deployment strategies, and more transparent dependency management.\nThe reality is that bundlers will continue to dominate the JavaScript ecosystem for the foreseeable future. 
However, now that we have cross-browser support for import maps, they become a viable alternative for specific use cases and smaller projects where simplicity is valued over optimization.\nWhether you\u0026rsquo;re building a small site that could benefit from the simplicity of direct ES modules, exploring micro-frontend architectures, or just wanting to better understand the module ecosystem, import maps give developers more options, and that\u0026rsquo;s always a good thing. They allow us to be more intentional about when and how we use bundlers, rather than reaching for them by default for every project.\n","permalink":"/posts/2023-05-18-import-maps-missing-piece//posts/2023-05-18-import-maps-missing-piece/","summary":"\u003ch2 id=\"what-are-import-maps\"\u003eWhat are import maps?\u003c/h2\u003e\n\u003cp\u003eES modules with direct URLs have been around for a while, allowing us to write code like:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-javascript\" data-lang=\"javascript\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"kr\"\u003eimport\u003c/span\u003e \u003cspan class=\"p\"\u003e{\u003c/span\u003e \u003cspan class=\"nx\"\u003esomething\u003c/span\u003e \u003cspan class=\"p\"\u003e}\u003c/span\u003e \u003cspan class=\"nx\"\u003efrom\u003c/span\u003e \u003cspan class=\"s1\"\u003e\u0026#39;https://cdn.example.com/packages/module/v1.2.3/index.js\u0026#39;\u003c/span\u003e\u003cspan class=\"p\"\u003e;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eImport maps provide a level of indirection between module specifiers and the actual URL where the module resides. 
They\u0026rsquo;re a simple JSON structure that maps from import specifiers to URLs:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-html\" data-lang=\"html\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e\u0026lt;\u003c/span\u003e\u003cspan class=\"nt\"\u003escript\u003c/span\u003e \u003cspan class=\"na\"\u003etype\u003c/span\u003e\u003cspan class=\"o\"\u003e=\u003c/span\u003e\u003cspan class=\"s\"\u003e\u0026#34;importmap\u0026#34;\u003c/span\u003e\u003cspan class=\"p\"\u003e\u0026gt;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e{\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"s2\"\u003e\u0026#34;imports\u0026#34;\u003c/span\u003e\u003cspan class=\"o\"\u003e:\u003c/span\u003e \u003cspan class=\"p\"\u003e{\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"s2\"\u003e\u0026#34;react\u0026#34;\u003c/span\u003e\u003cspan class=\"o\"\u003e:\u003c/span\u003e \u003cspan class=\"s2\"\u003e\u0026#34;https://esm.sh/react@18.2.0\u0026#34;\u003c/span\u003e\u003cspan class=\"p\"\u003e,\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"s2\"\u003e\u0026#34;react-dom\u0026#34;\u003c/span\u003e\u003cspan class=\"o\"\u003e:\u003c/span\u003e \u003cspan class=\"s2\"\u003e\u0026#34;https://esm.sh/react-dom@18.2.0\u0026#34;\u003c/span\u003e\u003cspan class=\"p\"\u003e,\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"s2\"\u003e\u0026#34;lib/\u0026#34;\u003c/span\u003e\u003cspan class=\"o\"\u003e:\u003c/span\u003e \u003cspan 
class=\"s2\"\u003e\u0026#34;https://cdn.skypack.dev/lib@1.2.3/\u0026#34;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"p\"\u003e}\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e}\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e\u0026lt;/\u003c/span\u003e\u003cspan class=\"nt\"\u003escript\u003c/span\u003e\u003cspan class=\"p\"\u003e\u0026gt;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eWith this map in place, our application code can now use clean, maintainable imports:\u003c/p\u003e","title":"Import Maps: The Missing Piece in JavaScript's Module Ecosystem"},{"content":"We\u0026rsquo;ve been wrestling with a performance paradox: we want to build rich, interactive web applications, but the process of getting them into the browser has become increasingly expensive. Hydration has slowly become one of our industry\u0026rsquo;s biggest performance bottlenecks.\nI recently spent a weekend exploring this experimental framework, and I\u0026rsquo;m genuinely intrigued by its approach. While it\u0026rsquo;s definitely not production-ready yet, Qwik represents a fundamental rethinking of how we deliver JavaScript to the browser.\nThe Hydration Problem If you\u0026rsquo;ve worked with React, Vue, or any modern framework that supports server-side rendering (SSR), you\u0026rsquo;re familiar with hydration. The server sends HTML to the browser for fast initial rendering, then ships a bundle of JavaScript that \u0026ldquo;rehydrates\u0026rdquo; the page, attaching event listeners and making it interactive.\nThe problem? 
This hydration process requires loading and executing a significant amount of JavaScript upfront - even for parts of the page the user might never interact with. With React 18\u0026rsquo;s improvements earlier this year and frameworks like Next.js and Remix optimizing the developer experience, we\u0026rsquo;ve mitigated some pain points, but the fundamental issue remains: hydration is expensive.\nJust last week, I deployed a mid-sized Next.js application that took nearly 2 seconds to become interactive on mid-range mobile devices, despite all my optimization efforts with code splitting and lazy loading. The hydration tax was still too high.\nEnter Qwik and Resumability Qwik takes a fundamentally different approach. Instead of hydrating, it \u0026ldquo;resumes\u0026rdquo; the application exactly where the server left off.\nThe key insight is that Qwik serializes the application state and event listeners directly into the HTML, allowing the browser to pick up the application without needing to run through the component tree again. The most impressive part? It only loads the JavaScript required for the specific interaction a user initiates.\nHere\u0026rsquo;s what makes Qwik\u0026rsquo;s approach revolutionary:\nNo upfront JavaScript cost - Qwik applications can be fully interactive with virtually zero initial JavaScript.\nFine-grained lazy loading - JavaScript loads only for the components that need to respond to user interactions, and only when those interactions occur.\nPreserving application state - The framework serializes component state directly into HTML, eliminating the need to rebuild it on the client.\nHow Resumability Works in Real-World UI Let\u0026rsquo;s move beyond toy counters and todo lists to see how Qwik handles real-world UI components. 
Performance promises are easy with simple examples, but to truly appreciate Qwik\u0026rsquo;s approach, we need to look at more involved UI scenarios that reflect what actual frontends do.\nScenario 1: Search Suggestions with Lazy Filtering Imagine a search input that displays suggestions by filtering a dataset fetched server-side:\nimport { component$, useStore, $ } from \u0026#39;@builder.io/qwik\u0026#39;; export const SearchWithSuggestions = component$(() =\u0026gt; { const state = useStore({ query: \u0026#39;\u0026#39;, results: [] }); const onSearchInput = $((evt: KeyboardEvent) =\u0026gt; { const value = (evt.target as HTMLInputElement).value; state.query = value; // simulate filtering large server-fetched list state.results = mockDataset.filter(item =\u0026gt; item.toLowerCase().includes(value.toLowerCase())); }); return ( \u0026lt;div\u0026gt; \u0026lt;input placeholder=\u0026#34;Type to search\u0026#34; value={state.query} onInput$={onSearchInput} /\u0026gt; {state.results.length \u0026gt; 0 \u0026amp;\u0026amp; ( \u0026lt;ul\u0026gt; {state.results.map(item =\u0026gt; ( \u0026lt;li key={item}\u0026gt;{item}\u0026lt;/li\u0026gt; ))} \u0026lt;/ul\u0026gt; )} \u0026lt;/div\u0026gt; ); }); const mockDataset = [\u0026#39;Qwik\u0026#39;, \u0026#39;React\u0026#39;, \u0026#39;Vue\u0026#39;, \u0026#39;Svelte\u0026#39;, \u0026#39;Solid\u0026#39;, \u0026#39;Preact\u0026#39;]; In popular frameworks, the entire filtering logic and suggestion dropdown code gets bundled and hydrated immediately, affecting Time-to-Interactive even if the user never searches for anything. With Qwik, the input remains inert until someone types. On the first keystroke, only the filtering chunk loads, fetched lazily. No global hydration is needed.\nThe key is the onInput$ event handler, which is serialized into HTML and fetched lazily when the user first types. 
The static HTML already contains the serialized empty search state, and no global filtering logic loads initially.\nScenario 2: Modal Loaded and Activated On Demand Modals often contain heavy UI elements like forms, tabs, and validators. In traditional frameworks, hydration demands all that JavaScript upfront. Qwik delays activation until the user actually triggers the modal:\nimport { component$, useStore, $ } from \u0026#39;@builder.io/qwik\u0026#39;; export const LazyModalExample = component$(() =\u0026gt; { const state = useStore({ showModal: false, formData: { name: \u0026#39;\u0026#39; } }); const openModal = $(() =\u0026gt; state.showModal = true); const closeModal = $(() =\u0026gt; state.showModal = false); const submitForm = $(() =\u0026gt; { console.log(\u0026#39;Submitted:\u0026#39;, state.formData); state.showModal = false; }); return ( \u0026lt;div\u0026gt; \u0026lt;button onClick$={openModal}\u0026gt;Open Modal\u0026lt;/button\u0026gt; {state.showModal \u0026amp;\u0026amp; ( \u0026lt;div class=\u0026#34;modal-overlay\u0026#34; onClick$={closeModal}\u0026gt; \u0026lt;div class=\u0026#34;modal-content\u0026#34; onClick$={(e) =\u0026gt; e.stopPropagation()}\u0026gt; \u0026lt;h2\u0026gt;Fill the Form\u0026lt;/h2\u0026gt; \u0026lt;input placeholder=\u0026#34;Name\u0026#34; value={state.formData.name} onInput$={$((e) =\u0026gt; state.formData.name = (e.target as HTMLInputElement).value)} /\u0026gt; \u0026lt;button onClick$={submitForm}\u0026gt;Submit\u0026lt;/button\u0026gt; \u0026lt;button onClick$={closeModal}\u0026gt;Cancel\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; )} \u0026lt;/div\u0026gt; ); }); Notice all event handlers are marked with $. This is Qwik\u0026rsquo;s way of indicating which functions should be split into separate chunks. These functions are automatically split into mini chunks that download only after the user triggers the modal or submits the form. By default, none of this code hits the initial bundle. 
Even the console.log() won\u0026rsquo;t end up on the client until the submit button is clicked.\nHow This Differs From Mainstream Frameworks In both examples, the initial HTML contains serialized reactive state and event boundary markers—but almost zero interactive JavaScript bytes. Only minimal code for the exact interaction arrives on-demand later, unlike frameworks that hydrate the entire route or page regardless of user paths.\nWith traditional frameworks, even with code splitting, these frameworks typically load at least the component-level JavaScript upfront during hydration. Qwik\u0026rsquo;s approach means a user who never opens a modal or never types in a search box will never download the JavaScript for those interactions.\nCurrent Limitations Qwik is still in preview, and there are clear signs of its experimental status.\nIt\u0026rsquo;s also worth noting that Qwik works best when you build with it from the ground up. While there are adapters for using React components inside Qwik, you\u0026rsquo;ll get the best performance by fully embracing its paradigm.\nWhere Qwik Fits in Today\u0026rsquo;s Landscape The frontend framework space is more vibrant than ever. We\u0026rsquo;ve seen Svelte and SolidJS challenge React\u0026rsquo;s rendering paradigm with compilation approaches. Astro has popularized the concept of \u0026ldquo;islands architecture\u0026rdquo; for partially hydrated apps. Next.js and Remix have pushed server-rendering to new heights.\nWhat makes Qwik interesting is that it\u0026rsquo;s not just an incremental improvement - it\u0026rsquo;s questioning one of the fundamental assumptions of modern web development: that hydration is necessary.\nShould You Try It? If you\u0026rsquo;re a developer who enjoys exploring cutting-edge approaches and doesn\u0026rsquo;t mind some rough edges, Qwik is absolutely worth experimenting with. 
Create a small prototype or rebuild a section of an existing application to understand its paradigm.\nWhile it is not ready for production applications just yet, tracking Qwik\u0026rsquo;s development could give you valuable insights into where web performance optimization is heading. The concepts behind resumability might influence how you think about code splitting and lazy loading, even in your current framework.\nConclusion Qwik represents one of the most interesting attempts I\u0026rsquo;ve seen to solve the hydration problem. By fundamentally rethinking how JavaScript is delivered to the browser, it opens up possibilities for performance optimization that go beyond what traditional frameworks can achieve with incremental improvements.\nWhether Qwik itself becomes mainstream or not, its approach of resumability rather than hydration points toward the next generation of web frameworks. As Core Web Vitals continue to impact both user experience and SEO, solutions like this that fundamentally address JavaScript performance will become increasingly valuable.\n","permalink":"/posts/2022-10-03-qwik-preview-javascript-hydration-resumability//posts/2022-10-03-qwik-preview-javascript-hydration-resumability/","summary":"\u003cp\u003eWe\u0026rsquo;ve been wrestling with a performance paradox: we want to build rich, interactive web applications, but the process of getting them into the browser has become increasingly expensive. Hydration has slowly become one of our industry\u0026rsquo;s biggest performance bottlenecks.\u003c/p\u003e\n\u003cp\u003eI recently spent a weekend exploring this experimental framework, and I\u0026rsquo;m genuinely intrigued by its approach. 
While it\u0026rsquo;s definitely not production-ready yet, Qwik represents a fundamental rethinking of how we deliver JavaScript to the browser.\u003c/p\u003e","title":"Qwik Preview: Rethinking JavaScript Hydration with Resumability"},{"content":"Libraries are shifting to ESM-only, build tools reveal sharp edges, and mixed ecosystems introduce subtle incompatibilities. This post offers a technically grounded snapshot of challenges and emerging patterns during this transitional phase, based on my hands-on experience and community input.\nHow We Got Here JavaScript\u0026rsquo;s module story has been\u0026hellip; complicated. In the early days, all JavaScript code lived in the global scope. Developers relied on naming conventions and immediately invoked function expressions (IIFEs) to prevent collisions. Then Node.js came along with CommonJS, giving us require() and module.exports. Finally, in 2015, ECMAScript 6 introduced native modules with import and export statements.\nFast forward to today, and we\u0026rsquo;re living in a mixed module world. Browsers have standardized on ESM, while Node.js has been gradually improving its ESM support since version 12. The promise is enticing: a unified module system that works everywhere. The reality? Let\u0026rsquo;s just say it\u0026rsquo;s been an adventure.\nThe Dependency Chain of Pain Last month, I was updating dependencies for a client project when everything suddenly broke. The culprit was a seemingly innocent patch update to a utility library three levels deep in our dependency tree. This library had decided to go \u0026ldquo;ESM-only,\u0026rdquo; which ordinarily wouldn\u0026rsquo;t be a problem—except that one of its consumers was strictly CommonJS, creating an incompatibility that bubbled up to our application.\nThis kind of \u0026ldquo;module impedance mismatch\u0026rdquo; has become increasingly common as more libraries make the switch. Your project might be ready for ESM, but what about your dependencies? And their dependencies? 
It\u0026rsquo;s turtles all the way down.\nAfter an afternoon of frustration, I ended up forking the problematic dependency to create a dual-package version. Not elegant, but it worked. This pattern of hand-patching dependencies or pinning to older versions has unfortunately become a common workaround in many projects.\nJest: The Testing Ground for Module Patience If you\u0026rsquo;ve tried to use Jest with ESM, you\u0026rsquo;ve probably experienced some pain. Jest itself is written in CommonJS, which creates some interesting challenges when testing ESM code.\nJust last week, I was setting up tests for a new project and spent hours trying to get Jest working with my ESM codebase. The solution involved a complex dance of configuration settings:\n// package.json { \u0026#34;type\u0026#34;: \u0026#34;module\u0026#34;, \u0026#34;jest\u0026#34;: { \u0026#34;transform\u0026#34;: {}, \u0026#34;extensionsToTreatAsEsm\u0026#34;: [\u0026#34;.js\u0026#34;], \u0026#34;moduleNameMapper\u0026#34;: { \u0026#34;^(\\\\.{1,2}/.*)\\\\.js$\u0026#34;: \u0026#34;$1\u0026#34; } } } And that\u0026rsquo;s just the beginning. Once you add TypeScript to the mix, things get even more interesting. I\u0026rsquo;ve had more success with Vitest lately, which was built with ESM support in mind, but migrating existing test suites isn\u0026rsquo;t always practical.\nThe \u0026ldquo;type\u0026rdquo;: \u0026ldquo;module\u0026rdquo; Toggle Switch One of the most powerful yet confusing aspects of the ESM transition is the humble \u0026quot;type\u0026quot;: \u0026quot;module\u0026quot; field in package.json. Add this line, and suddenly all your .js files are interpreted as ES modules. 
Remove it, and they\u0026rsquo;re back to CommonJS.\nThis leads to a situation I call \u0026ldquo;module whiplash.\u0026rdquo; I\u0026rsquo;ve watched developers (including myself) toggle this setting back and forth as they encounter various compatibility issues, often without fully understanding the implications:\n# A common progression I\u0026#39;ve observed in git histories: - Add \u0026#34;type\u0026#34;: \u0026#34;module\u0026#34; - Fix import paths to include .js extensions - Run into problems with __dirname not being defined - Add workarounds for missing CommonJS globals - Encounter a library that doesn\u0026#39;t work with ESM - Remove \u0026#34;type\u0026#34;: \u0026#34;module\u0026#34; - Fix all the import paths again - Add back \u0026#34;type\u0026#34;: \u0026#34;module\u0026#34; but with more workarounds The funny part is that after all this toggling, many projects end up with a hybrid approach: ESM for application code, with careful boundaries around CommonJS dependencies.\nThe Build Tool Revolution Last year, I wrote about how Go and Rust were bringing unprecedented speed to JavaScript tooling. That trend has continued to reshape how we approach module bundling and transformation. Tools like esbuild (written in Go) have dramatically accelerated build times, making the development experience with ESM more bearable.\nIn my daily work, I\u0026rsquo;ve largely switched from webpack configurations that took minutes to complete to esbuild-powered setups that finish in seconds. This speed has been a game-changer when working with ESM, as the faster feedback loop helps catch module-related issues earlier.\nI\u0026rsquo;ve also been impressed with how SWC has matured over the last year. It\u0026rsquo;s now integrated into Next.js by default, replacing Babel for transpilation. 
For projects that need to support both ESM and CommonJS, these newer tools provide much more efficient transformation paths.\nHowever, the transition hasn\u0026rsquo;t been entirely smooth. While these new tools excel at speed, they sometimes lack the full feature set of their JavaScript predecessors. I\u0026rsquo;ve occasionally had to maintain parallel configurations—using esbuild for development and webpack for production builds—when requiring advanced optimizations or specific plugins.\nThe Browser-Node Divide What works in browsers doesn\u0026rsquo;t always work in Node.js, and vice versa. This has always been true to some extent, but the module transition has highlighted these differences.\nFor example, with a bundler (or an import map) resolving specifiers for the browser, you can write a bare import like:\nimport { something } from \u0026#39;some-package\u0026#39;; But in native Node.js ESM, relative imports need explicit file extensions:\nimport { something } from \u0026#39;./utils.js\u0026#39;; This inconsistency has led to many tools generating different output for browser versus Node.js environments, adding complexity to build pipelines.\nI recently had to maintain two separate entry points for a library: one for browsers (with path mapping handled by bundlers) and one for Node.js (with explicit file extensions). It works, but it feels like we\u0026rsquo;ve added accidental complexity to what should be a straightforward standard.\nWhat\u0026rsquo;s Working: Practical Patterns Despite the challenges, several patterns have emerged to make the transition more manageable. The dual package approach has gained traction among library authors, allowing packages to work in both ESM and CommonJS environments through clever use of the exports field in package.json. 
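For reference, a minimal sketch of such a dual package (the package name and file paths here are illustrative): conditional exports point require() at a CommonJS build and import at an ESM build:

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  },
  "main": "./dist/index.cjs"
}
```

The top-level main acts as a fallback for older tooling that doesn't understand the exports field, while the .cjs extension keeps the CommonJS build working despite "type": "module".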
When publishing libraries myself, I\u0026rsquo;ve found that build-time transpilation with tools like esbuild, Rollup, or SWC creates flexibility to write in modern ESM while distributing in multiple formats.\nFor application development, I\u0026rsquo;ve seen a clear divide forming. Projects built with newer tools like Vite that embrace ESM from the ground up tend to have a smoother experience than those retrofitting ESM into webpack-based setups. There\u0026rsquo;s also the file extension strategy—using explicit .mjs for ESM and .cjs for CommonJS files—which sidesteps many configuration headaches at the cost of more verbose filenames. I was initially resistant to this approach, but after repeatedly dealing with configuration issues, I\u0026rsquo;ve come to appreciate its clarity.\nWhere We\u0026rsquo;re Headed The transition to ESM hasn\u0026rsquo;t been as smooth as many of us hoped, but progress is being made. Node.js\u0026rsquo;s ESM implementation continues to mature, tooling is improving, and more libraries are supporting both module systems or moving to ESM entirely.\nAs someone who builds for both browsers and Node.js, I\u0026rsquo;m cautiously optimistic. The future where we have a single, unified module system that works everywhere still seems achievable, even if the path there has been rockier than expected.\nThere\u0026rsquo;s a quote that\u0026rsquo;s been making the rounds in developer circles, which captures the sentiment well: \u0026ldquo;ESM is the future of JavaScript, and always will be.\u0026rdquo; It\u0026rsquo;s a bit cynical, but it contains a kernel of truth—the transition is taking longer than many expected, but the direction is clear.\nESM and Modern Tooling The rise of ESLint plugins specific to ESM has also been helpful in navigating this transition. 
Tools for statically analyzing imports and ensuring consistency across module boundaries have become essential, especially in larger codebases where different parts might be at different stages of the migration.\nOne pattern I\u0026rsquo;ve found particularly useful is explicit import delineation. By setting up ESLint rules that enforce how different kinds of imports are organized—external packages first, then internal absolute imports, followed by relative imports—we can make module boundaries more visible and maintainable. This has helped catch subtle ESM-related issues early.\nLessons Learned My journey through the ESM transition has taught me the value of pragmatism above all else. There\u0026rsquo;s no need to go all-in on ESM if your ecosystem isn\u0026rsquo;t ready—incremental approaches often yield better results with less frustration. I\u0026rsquo;ve found that investing time in understanding and optimizing build pipelines pays dividends, as good tooling can abstract away many of the inconsistencies between environments.\nCross-environment testing has saved me countless hours of debugging production issues. What works perfectly in your development setup might break spectacularly when deployed if the module assumptions differ even slightly. Perhaps most importantly, I\u0026rsquo;ve learned to stay informed while not chasing every trend. The JavaScript ecosystem is still evolving rapidly, and approaches that seem promising today might be obsolete tomorrow. A measured, thoughtful approach to adoption has consistently outperformed hasty migrations in my experience.\n","permalink":"/posts/2022-08-20-esm-field-notes-transition/","summary":"\u003cp\u003eLibraries are shifting to ESM-only, build tools reveal sharp edges, and mixed ecosystems introduce subtle incompatibilities. 
This post offers a technically grounded snapshot of challenges and emerging patterns during this transitional phase, based on my hands-on experience and community input.\u003c/p\u003e\n\u003ch2 id=\"how-we-got-here\"\u003eHow We Got Here\u003c/h2\u003e\n\u003cp\u003eJavaScript\u0026rsquo;s module story has been\u0026hellip; complicated. In the early days, all JavaScript code lived in the global scope. Developers relied on naming conventions and immediately invoked function expressions (IIFEs) to prevent collisions. Then Node.js came along with CommonJS, giving us \u003ccode\u003erequire()\u003c/code\u003e and \u003ccode\u003emodule.exports\u003c/code\u003e. Finally, in 2015, ECMAScript 6 introduced native modules with \u003ccode\u003eimport\u003c/code\u003e and \u003ccode\u003eexport\u003c/code\u003e statements.\u003c/p\u003e","title":"ESM in the Wild: Field Notes from the JavaScript Module Transition"},{"content":"I wasn\u0026rsquo;t expecting this post to be this long when writing it, but I think the length reflects what a surprisingly thorny challenge generating unique IDs for HTML elements in React has turned out to be. What looks trivial on the surface (\u0026ldquo;just create a unique string\u0026rdquo;) becomes a complex problem when you factor in (streaming) server-side rendering, (partial) hydration, and accessibility requirements. With React 18\u0026rsquo;s release, the team finally shipped an official solution: the useId() hook.\nThe Problem: Why IDs Matter Let\u0026rsquo;s start with a common scenario. You\u0026rsquo;re building a form component that needs to associate labels with inputs:\nfunction FormField() { return ( \u0026lt;\u0026gt; \u0026lt;label htmlFor=\u0026#34;nameInput\u0026#34;\u0026gt;Name:\u0026lt;/label\u0026gt; \u0026lt;input id=\u0026#34;nameInput\u0026#34; type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/\u0026gt; ); } This works perfectly, until you use this component multiple times on the same page. 
Suddenly, you have duplicate IDs, violating HTML\u0026rsquo;s uniqueness requirement and breaking accessibility. Screen readers and assistive technologies rely on these connections between labels and form controls, so getting this right is not optional.\nThe Old Ways: Pre-React 18 Solutions Before React 18, we had several approaches to this problem, each with significant drawbacks.\n1. The Counter Approach let idCounter = 0; function FormField() { const id = `field-${idCounter++}`; return ( \u0026lt;\u0026gt; \u0026lt;label htmlFor={id}\u0026gt;Name:\u0026lt;/label\u0026gt; \u0026lt;input id={id} type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/\u0026gt; ); } This works fine for client-side rendering in a predictable order, but breaks down with server-side rendering. If component rendering order differs between server and client (which is common in React), you\u0026rsquo;ll get hydration mismatches.\n2. The UUID Approach import { v4 as uuidv4 } from \u0026#39;uuid\u0026#39;; function FormField() { // Warning: This causes hydration mismatches! const id = uuidv4(); return ( \u0026lt;\u0026gt; \u0026lt;label htmlFor={id}\u0026gt;Name:\u0026lt;/label\u0026gt; \u0026lt;input id={id} type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/\u0026gt; ); } UUIDs are appealing because they\u0026rsquo;re guaranteed to be unique. However, they\u0026rsquo;re generated randomly, so the server and client will generate different IDs, causing hydration errors. Plus, they\u0026rsquo;re unnecessarily long for DOM IDs.\n3. The useState Approach function FormField() { const [id] = useState(() =\u0026gt; `field-${Math.random().toString(36).substr(2, 9)}`); return ( \u0026lt;\u0026gt; \u0026lt;label htmlFor={id}\u0026gt;Name:\u0026lt;/label\u0026gt; \u0026lt;input id={id} type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/\u0026gt; ); } Using useState with an initialization function helps ensure the ID is stable across renders, but we still have the server/client mismatch problem.\n4. 
The Context Approach: React Aria\u0026rsquo;s SSR Solution Libraries like React Aria implemented more sophisticated solutions using context providers. Here\u0026rsquo;s a simplified version of React Aria\u0026rsquo;s pre-React 18 implementation:\n// Simplified version of React Aria\u0026#39;s implementation const SSRContext = createContext({ prefix: \u0026#39;\u0026#39;, current: 0 }); function SSRProvider({children}) { const [isSSR, setIsSSR] = useState(true); const parentContext = useContext(SSRContext); const value = useMemo(() =\u0026gt; ({ prefix: parentContext.prefix + (parentContext.current++).toString(36) + \u0026#39;:\u0026#39;, current: 0 }), [parentContext]); // Switch to client rendering after mount useLayoutEffect(() =\u0026gt; { setIsSSR(false); }, []); return ( \u0026lt;SSRContext.Provider value={value}\u0026gt; {children} \u0026lt;/SSRContext.Provider\u0026gt; ); } function useSSRSafeId() { const context = useContext(SSRContext); const counter = useCounter(context); // Get and increment counter return `id-${context.prefix}${counter}`; } This approach was one of the most robust solutions before React 18, but it had critical limitations:\nRelied on a context-based counter system that generated deterministic IDs based on the component\u0026rsquo;s position in the tree Required wrapping your application (or components) in an SSRProvider Used prefixes to handle nested contexts Attempted to maintain ID consistency across server and client While this worked reasonably well in React 17 and earlier versions, it breaks down in React 18 under several conditions:\nSuspense boundaries create rendering discontinuities: When two Suspense boundaries exist within the same context, they might hydrate in a different order than they were rendered on the server, disrupting the sequential ID generation.\nOut-of-order rendering: If sibling components or parent components suspend during hydration, the rendering order shifts. 
React Aria\u0026rsquo;s counter-based approach assumes a stable rendering sequence, which React 18\u0026rsquo;s streaming and Suspense fundamentally break.\nPartial hydration issues: When parts of the UI hydrate independently, the counter might increment differently on the server versus the client, leading to hydration mismatches or duplicate IDs.\nHere\u0026rsquo;s a concrete example: Imagine two sibling components under an SSRProvider rendering on the server with IDs prefix-0 and prefix-1. On the client, if a Suspense boundary around the first component delays its hydration, the second component hydrates first and might reassign prefix-0, causing a clash when the first component eventually hydrates.\nThese issues couldn\u0026rsquo;t be solved in user space; they required deep integration with React\u0026rsquo;s rendering system.\nUnderstanding useId() React 18 introduces useId(), a hook specifically designed to solve this problem:\nimport { useId } from \u0026#39;react\u0026#39;; function FormField() { const id = useId(); return ( \u0026lt;\u0026gt; \u0026lt;label htmlFor={id}\u0026gt;Name:\u0026lt;/label\u0026gt; \u0026lt;input id={id} type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/\u0026gt; ); } That\u0026rsquo;s it. No more complex workarounds, no more SSR/hydration mismatches, no more accessibility issues.\nThe beauty of useId() is that it:\nGenerates a stable ID that\u0026rsquo;s consistent across renders Ensures server-generated IDs match client-generated ones Works seamlessly with concurrent rendering and streaming SSR Doesn\u0026rsquo;t require external dependencies or context providers How useId() Works Under the Hood On the server, React maintains a counter for ID generation. When a component calls useId(), it increments this counter and returns a string with a special format (like :r0:, :r1:, etc.). 
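As a rough mental model (an illustrative simplification, not React\u0026rsquo;s actual source, which also encodes the component\u0026rsquo;s position in the tree), the server-side counter behaves like this:

```javascript
// Illustrative sketch of deterministic server-side ID generation,
// loosely modeled on useId()'s ":r0:", ":r1:" format.
function createIdGenerator(prefix = 'r') {
  let counter = 0;
  return function nextId() {
    // base-32 keeps IDs short even with many components on a page
    return `:${prefix}${(counter++).toString(32)}:`;
  };
}

const nextServerId = createIdGenerator();
console.log(nextServerId()); // first component gets ":r0:"
console.log(nextServerId()); // second component gets ":r1:"
```

The important property is determinism: given the same render order, the server and a full client render produce the same sequence, which is exactly the assumption streaming and Suspense break for user-space implementations.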
The IDs are deterministic based on the component\u0026rsquo;s position in the React tree during server rendering.\nDuring client-side hydration, React doesn\u0026rsquo;t regenerate these IDs. Instead, it preserves the server-generated IDs, avoiding mismatches. This works because the component tree structure should match between server and client.\nThe special case happens when an ID is first referenced during a regular client render (not hydration). In this scenario, React forces the useId() hook to \u0026ldquo;upgrade\u0026rdquo; to a client-generated ID, causing everything that references that ID to re-render with the new value. If some trees haven\u0026rsquo;t hydrated yet, React will hydrate them first before applying the update.\nThis approach handles the complexities of React 18\u0026rsquo;s streaming server renderer, where HTML can be delivered out-of-order. Solutions that worked in React 17 and earlier (like React Aria\u0026rsquo;s counter-based approach) break down in React 18 because you can no longer rely on a consistent rendering sequence.\nWhy React Aria\u0026rsquo;s Approach Couldn\u0026rsquo;t Survive React 18 React 18\u0026rsquo;s Streaming SSR fundamentally changes rendering order: With streaming SSR, React can send HTML in chunks based on Suspense boundaries. This means components might render in a completely different order than they appear in the final DOM, breaking any solution that relies on sequential rendering.\nSuspense boundary interactions break deterministic IDs: Consider this scenario:\n\u0026lt;SSRProvider\u0026gt; \u0026lt;Suspense fallback={\u0026lt;Spinner /\u0026gt;}\u0026gt; \u0026lt;ComponentA /\u0026gt; {/* Generates id-0 on server */} \u0026lt;/Suspense\u0026gt; \u0026lt;ComponentB /\u0026gt; {/* Generates id-1 on server */} \u0026lt;/SSRProvider\u0026gt; If during client-side hydration, ComponentA suspends (e.g., waiting for data), what happens? In React 17, hydration would wait. 
But in React 18, React can continue hydrating ComponentB while ComponentA is suspended. Now your ID sequence is broken: ComponentB might get id-0 on the client while the server gave it id-1.\nNo user-space solution could access React\u0026rsquo;s internal rendering queue: To solve this properly, a solution needs to understand React\u0026rsquo;s rendering sequence, Suspense boundaries, and hydration status, information only available to React internally. This is why useId() needed to be built into React itself.\nReact Aria\u0026rsquo;s solution worked by wrapping Suspense boundaries in their own SSRProvider, creating isolated ID spaces, but even this approach couldn\u0026rsquo;t handle all the edge cases introduced by React 18\u0026rsquo;s concurrent features.\nCreating Multiple Related IDs An elegant feature of useId() is the ability to create multiple related IDs from a single base ID. Instead of calling useId() multiple times, you can append suffixes:\nfunction NameFields() { const id = useId(); return ( \u0026lt;div\u0026gt; \u0026lt;label htmlFor={`${id}-firstName`}\u0026gt;First Name\u0026lt;/label\u0026gt; \u0026lt;div\u0026gt; \u0026lt;input id={`${id}-firstName`} type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;label htmlFor={`${id}-lastName`}\u0026gt;Last Name\u0026lt;/label\u0026gt; \u0026lt;div\u0026gt; \u0026lt;input id={`${id}-lastName`} type=\u0026#34;text\u0026#34; /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; ); } This pattern scales beautifully for complex forms with many fields, while maintaining the hook rules (calling useId() just once per component).\nAccessibility and ARIA Attributes One of the most important use cases for useId() is supporting accessibility via ARIA attributes. 
Many ARIA attributes use ID references to connect elements:\nfunction Tooltip() { const id = useId(); return ( \u0026lt;\u0026gt; \u0026lt;button aria-describedby={id}\u0026gt;Hover me\u0026lt;/button\u0026gt; \u0026lt;div id={id} role=\u0026#34;tooltip\u0026#34; className=\u0026#34;tooltip\u0026#34;\u0026gt; This is a tooltip \u0026lt;/div\u0026gt; \u0026lt;/\u0026gt; ); } The useId() hook properly handles all ID-based ARIA attributes, including those that accept multiple IDs as space-separated lists:\nfunction ComplexLabel() { const headingId = useId(); const descriptionId = useId(); return ( \u0026lt;\u0026gt; \u0026lt;h2 id={headingId}\u0026gt;Account Settings\u0026lt;/h2\u0026gt; \u0026lt;p id={descriptionId}\u0026gt;Manage your account preferences\u0026lt;/p\u0026gt; \u0026lt;input aria-labelledby={`${headingId} ${descriptionId}`} /\u0026gt; \u0026lt;/\u0026gt; ); } Why It\u0026rsquo;s Critical for React 18\u0026rsquo;s Streaming SSR React 18 introduces a powerful new streaming server-side rendering implementation that leverages Suspense to deliver HTML in chunks. Instead of waiting for the entire page to render, React can send initial HTML quickly, then stream in more content as it becomes available.\nThis creates a fundamental problem for traditional ID generation approaches, including sophisticated ones like React Aria\u0026rsquo;s. 
Consider what happens in a streaming scenario:\nReact begins rendering the page on the server A component suspends while waiting for data React sends HTML for already-rendered components to the client The suspended component resolves and gets rendered React sends this component\u0026rsquo;s HTML as a separate chunk If you\u0026rsquo;re using a counter-based approach (even with contexts and prefixes like React Aria\u0026rsquo;s solution), the IDs generated might be completely different between server and client because:\nComponents might render in different orders Suspense boundaries might resolve in different sequences Client hydration might proceed in chunks, rather than a single pass There\u0026rsquo;s no optimal user-space solution for this problem. It requires deep integration with React\u0026rsquo;s rendering mechanism, which is exactly what useId() provides.\nFrom useOpaqueIdentifier to useId() The hook was first implemented as useOpaqueIdentifier in late 2019. This early version returned an opaque value: a string on the server, but a special object on the client that would warn if you tried to use its string value directly instead of passing it to a DOM attribute.\nBased on community feedback, the React team changed the implementation to return a regular string, making it more flexible for real-world use cases. This change enables features like appending suffixes and using IDs in space-separated lists for ARIA attributes.\nMigrations If you\u0026rsquo;re upgrading to React 18 and want to adopt useId(), here\u0026rsquo;s how to migrate from common patterns:\nFrom Counter-Based Approaches: // Before let counter = 0; function Component() { const id = `prefix-${counter++}`; // ... } // After function Component() { const id = useId(); // You can still add your prefix if needed const prefixedId = `prefix-${id}`; // ... 
} From UUID or Random String Approaches: // Before function Component() { const [id] = useState(() =\u0026gt; `id-${Math.random().toString(36).slice(2)}`); // ... } // After function Component() { const id = useId(); // ... } From React Aria or Similar Context-Based Solutions: // Before function Component() { const id = useId(); // React Aria\u0026#39;s useId // ... } // After // Remove the SSRProvider wrapper function Component() { const id = useId(); // React\u0026#39;s built-in useId // ... } If you\u0026rsquo;re using a library like React Aria that provides its own ID generation solution, check for updates that might leverage useId() internally. React Aria 3.15+ now uses React\u0026rsquo;s useId() when available, falling back to their previous implementation for compatibility with older React versions.\nReal-World Impact In my recent projects, switching to useId() eliminated several classes of bugs:\nHydration errors in Next.js applications that were caused by ID mismatches Accessibility issues reported by automated testing tools Edge cases in complex forms with conditionally rendered fields One particularly troublesome project involved a multi-step form with dynamically added form fields. With an old UUID approach, we would occasionally see flickering inputs and lost focus when a user was typing, all because IDs would change during rerenders. Switching to useId() fixed these issues.\nLooking Beyond Form Controls While form accessibility is the most common use case, useId() is valuable anywhere you need stable, unique identifiers. Here\u0026rsquo;s a detailed look at four key areas where useId() shines beyond simple form controls:\n1. SVG Definitions with \u0026lt;defs\u0026gt; Elements SVG elements often require unique identifiers, especially when using definitions that are referenced elsewhere in the markup. 
This is critical for maintaining proper references when components render multiple times. Note that SVG gradient references only work as SVG paint (fill or stroke), not as a CSS background:\nfunction GradientButton() { const gradientId = useId(); return ( \u0026lt;button\u0026gt; \u0026lt;svg width=\u0026#34;16\u0026#34; height=\u0026#34;16\u0026#34; aria-hidden=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;defs\u0026gt; \u0026lt;linearGradient id={gradientId} x1=\u0026#34;0%\u0026#34; y1=\u0026#34;0%\u0026#34; x2=\u0026#34;100%\u0026#34; y2=\u0026#34;0%\u0026#34;\u0026gt; \u0026lt;stop offset=\u0026#34;0%\u0026#34; stopColor=\u0026#34;#3366FF\u0026#34; /\u0026gt; \u0026lt;stop offset=\u0026#34;100%\u0026#34; stopColor=\u0026#34;#00CCFF\u0026#34; /\u0026gt; \u0026lt;/linearGradient\u0026gt; \u0026lt;/defs\u0026gt; \u0026lt;rect width=\u0026#34;16\u0026#34; height=\u0026#34;16\u0026#34; fill={`url(#${gradientId})`} /\u0026gt; \u0026lt;/svg\u0026gt; Gradient Button \u0026lt;/button\u0026gt; ); } Without stable IDs, SVG references can break during hydration or when multiple instances of a component render on the same page. This leads to missing gradients, patterns, or animations. The server/client consistency of useId() ensures SVG references remain intact across renders and hydration.\n2. 
Modal Dialogs and Their Triggers Modals require proper accessibility connections to their triggers, typically using ARIA attributes like aria-labelledby or aria-controls (note that aria-controls on the trigger must reference the dialog element itself):\nfunction AccessibleModal() { const [isOpen, setIsOpen] = useState(false); const dialogId = useId(); const headingId = useId(); const descriptionId = useId(); return ( \u0026lt;\u0026gt; \u0026lt;button onClick={() =\u0026gt; setIsOpen(true)} aria-controls={dialogId} aria-expanded={isOpen} \u0026gt; Open Settings \u0026lt;/button\u0026gt; {isOpen \u0026amp;\u0026amp; ( \u0026lt;div id={dialogId} role=\u0026#34;dialog\u0026#34; aria-labelledby={headingId} aria-describedby={descriptionId} \u0026gt; \u0026lt;h2 id={headingId}\u0026gt;Application Settings\u0026lt;/h2\u0026gt; \u0026lt;p id={descriptionId}\u0026gt;Configure your application preferences\u0026lt;/p\u0026gt; \u0026lt;button onClick={() =\u0026gt; setIsOpen(false)}\u0026gt;Close\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; )} \u0026lt;/\u0026gt; ); } These relationships are critical for screen readers to announce the modal\u0026rsquo;s purpose when it opens. The stable IDs provided by useId() ensure these connections work properly, especially in server-rendered applications where components might hydrate in varying orders.\n3. Tooltip and Popover Relationships Tooltips and popovers have similar accessibility requirements, often using aria-describedby to connect the tooltip content to its trigger:\nfunction HelpTooltip({ text, children }) { const [isVisible, setIsVisible] = useState(false); const tooltipId = useId(); return ( \u0026lt;div className=\u0026#34;tooltip-container\u0026#34;\u0026gt; \u0026lt;div aria-describedby={isVisible ? 
tooltipId : undefined} onMouseEnter={() =\u0026gt; setIsVisible(true)} onMouseLeave={() =\u0026gt; setIsVisible(false)} \u0026gt; {children} \u0026lt;/div\u0026gt; {isVisible \u0026amp;\u0026amp; ( \u0026lt;div id={tooltipId} role=\u0026#34;tooltip\u0026#34; className=\u0026#34;tooltip\u0026#34; \u0026gt; {text} \u0026lt;/div\u0026gt; )} \u0026lt;/div\u0026gt; ); } With useId(), tooltip relationships remain stable even when components reorder or when server rendering streams HTML in chunks. This ensures screen readers can correctly associate tooltip content with the elements they describe.\n4. Custom Component Libraries Component libraries face unique challenges with ID generation, as they need to ensure unique IDs across potentially hundreds of instances of their components:\n// In a component library function Accordion() { // Base ID for the accordion const id = useId(); const [activePanel, setActivePanel] = useState(0); const panels = [ { title: \u0026#34;Section 1\u0026#34;, content: \u0026#34;Content 1\u0026#34; }, { title: \u0026#34;Section 2\u0026#34;, content: \u0026#34;Content 2\u0026#34; } ]; return ( \u0026lt;div className=\u0026#34;accordion\u0026#34;\u0026gt; {panels.map((panel, index) =\u0026gt; { const headerId = `${id}-header-${index}`; const panelId = `${id}-panel-${index}`; return ( \u0026lt;div key={index} className=\u0026#34;accordion-item\u0026#34;\u0026gt; \u0026lt;h3\u0026gt; \u0026lt;button id={headerId} aria-expanded={activePanel === index} aria-controls={panelId} onClick={() =\u0026gt; setActivePanel(index)} \u0026gt; {panel.title} \u0026lt;/button\u0026gt; \u0026lt;/h3\u0026gt; \u0026lt;div id={panelId} role=\u0026#34;region\u0026#34; aria-labelledby={headerId} hidden={activePanel !== index} \u0026gt; {panel.content} \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; ); })} \u0026lt;/div\u0026gt; ); } A component library can generate consistent IDs across server and client renders, ensuring accessibility relationships work properly regardless 
of how many times a component is instantiated on a page.\nConclusion The useId() hook in React 18 addresses specific challenges around generating stable IDs that are consistent between server and client rendering. This built-in solution resolves several technical issues that previous approaches couldn\u0026rsquo;t fully solve, particularly in the context of React 18\u0026rsquo;s streaming SSR and concurrent rendering features.\nThe hook\u0026rsquo;s implementation at the framework level provides access to React\u0026rsquo;s internal rendering system, enabling it to maintain ID consistency across various rendering scenarios that user-space solutions couldn\u0026rsquo;t reliably handle. This is particularly important for applications that use server-side rendering, where hydration mismatches can cause accessibility issues or visual inconsistencies.\nThe integration with React\u0026rsquo;s core rendering mechanism allows useId() to handle complex cases involving Suspense boundaries, out-of-order rendering, and partial hydration. For developers working with React 18, adopting useId() provides a standardized approach to ID generation that works reliably across rendering environments and component structures.\n","permalink":"/posts/2022-06-10-useid-id-gen-headaches/","summary":"\u003cp\u003eI wasn\u0026rsquo;t expecting this post to be this long when writing it, but I think the length reflects what a surprisingly thorny challenge generating unique IDs for HTML elements in React has turned out to be. What looks trivial on the surface (\u0026ldquo;just create a unique string\u0026rdquo;) becomes a complex problem when you factor in (streaming) server-side rendering, (partial) hydration, and accessibility requirements. 
With React 18\u0026rsquo;s release, the team finally shipped an official solution: the useId() hook.\u003c/p\u003e","title":"React 18's useId(): The End of Element ID Generation Headaches"},{"content":"React 18 launched as a stable release in March, bringing the long-anticipated concurrent rendering engine to production environments. After spending several weeks experiencing it across multiple projects, the performance improvements and developer experience enhancements are visible. What was theoretical in the alpha and beta releases has now proven its worth in real-world applications.\nConcurrent Rendering: From Theory to Practice In my previous coverage of React 18\u0026rsquo;s alpha release, I explored the shift from \u0026ldquo;Concurrent Mode\u0026rdquo; to \u0026ldquo;Concurrent React\u0026rdquo; and the theoretical underpinnings of what makes concurrency special. Now that we\u0026rsquo;re using it in real applications, let\u0026rsquo;s see how these concepts translate to practical benefits.\nA client\u0026rsquo;s e-commerce product management system previously struggled when administrators filtered through thousands of inventory items. After upgrading to React 18, operations that used to lock up the interface for seconds now remain responsive, all without rewriting the core application logic.\nUnder the Hood: How Concurrent Rendering Prevents UI Freezing The key innovation powering this improvement is interruptible rendering. In previous React versions, rendering was synchronous and blocking - once React started rendering a component tree, it had to finish that work before returning control to the browser. 
During complex updates, this would block the main thread, preventing it from handling user interactions like typing or clicking.\nWith React 18\u0026rsquo;s concurrent renderer, rendering work is now broken into small chunks with different priorities:\nTime slicing: React performs a bit of rendering work, then pauses and yields back to the browser, allowing it to handle user events, run animations, and maintain responsiveness. If a higher-priority update comes in (like a user typing), React can abandon its current render and prioritize the urgent work.\nPrioritized updates: React 18 distinguishes between urgent updates (user interactions) and background transitions (rendering filtered results). This prioritization ensures that your application stays responsive even during complex state updates.\nCooperative scheduling: Rather than monopolizing the main thread, React now cooperatively yields control back to the browser, creating space for user interactions to be processed promptly, even when there\u0026rsquo;s heavy rendering work happening in the background.\nHere\u0026rsquo;s how this works in practice with the Transition API:\nfunction ProductFilter() { const [query, setQuery] = useState(\u0026#39;\u0026#39;); const [filteredProducts, setFilteredProducts] = useState(allProducts); const [isPending, startTransition] = useTransition(); function handleFilterChange(e) { // This is urgent - show what the user typed immediately setQuery(e.target.value); // This is deferrable - can be interrupted if needed startTransition(() =\u0026gt; { // Even with thousands of products, UI stays responsive setFilteredProducts(allProducts.filter( product =\u0026gt; product.name.includes(e.target.value) )); }); } return ( \u0026lt;\u0026gt; \u0026lt;input value={query} onChange={handleFilterChange} /\u0026gt; {isPending \u0026amp;\u0026amp; \u0026lt;Spinner /\u0026gt;} \u0026lt;ProductList products={filteredProducts} /\u0026gt; \u0026lt;/\u0026gt; ); } This approach is dramatically 
different from workarounds we previously relied on like debouncing or throttling, as it doesn\u0026rsquo;t introduce arbitrary delays - React is intelligent about when to yield control based on the user\u0026rsquo;s interaction with your app.\nServer-Side Rendering Reimagined One area where React 18 truly shines is its complete overhaul of the server-side rendering architecture through a new feature called \u0026ldquo;Suspense SSR.\u0026rdquo;\nIn previous React versions, SSR was an all-or-nothing affair. Your server would render the entire page to HTML before sending any content to the user. If one component was slow (perhaps fetching data), it would delay the entire page.\nReact 18 introduces streaming SSR with selective hydration, which fundamentally changes this model:\n// Server component using the new SSR model function App() { return ( \u0026lt;Layout\u0026gt; \u0026lt;NavBar /\u0026gt; \u0026lt;Suspense fallback={\u0026lt;ArticleSkeleton /\u0026gt;}\u0026gt; \u0026lt;Article /\u0026gt; \u0026lt;/Suspense\u0026gt; \u0026lt;Suspense fallback={\u0026lt;SidebarSkeleton /\u0026gt;}\u0026gt; \u0026lt;Sidebar /\u0026gt; \u0026lt;/Suspense\u0026gt; \u0026lt;/Layout\u0026gt; ); } With this approach, the server sends HTML for the \u0026lt;Layout\u0026gt; and \u0026lt;NavBar\u0026gt; immediately, along with the skeleton placeholders. As \u0026lt;Article\u0026gt; and \u0026lt;Sidebar\u0026gt; become available, the server streams their HTML to the browser. 
Meanwhile, React on the client can hydrate interactive parts as soon as their HTML arrives, without waiting for the entire page.\nNew Hooks for the Concurrent World React 18 introduces several new hooks that help manage the complexities of concurrent rendering:\nuseId Update: we dive deeper into this hook in a follow-up post - \u0026ldquo;React 18\u0026rsquo;s useId(): The End of Element ID Generation Headaches\u0026rdquo;\nThis hook generates unique IDs that are stable across the server and client, solving a longstanding issue with SSR:\nfunction AccessibleInput() { const id = useId(); return ( \u0026lt;\u0026gt; \u0026lt;label htmlFor={id}\u0026gt;Email\u0026lt;/label\u0026gt; \u0026lt;input id={id} type=\u0026#34;email\u0026#34; /\u0026gt; \u0026lt;/\u0026gt; ); } Before useId, we often resorted to problematic workarounds for generating IDs that wouldn\u0026rsquo;t mismatch between server and client renders. This seemingly small addition has improved our accessibility implementation significantly.\nuseSyncExternalStore This hook provides a consistent way to read from external data sources:\nconst state = useSyncExternalStore( subscribe, // How to subscribe to the store getSnapshot, // How to get the current value getServerSnapshot // Optional: How to get the value during SSR ); We\u0026rsquo;ve found this particularly useful when integrating with non-React state management libraries like RxJS. 
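The store side of that contract is easy to sketch without React. Anything exposing this subscribe/getSnapshot pair (the signature comes from the hook; the store itself is an illustrative sketch) can be consumed by useSyncExternalStore:

```javascript
// Minimal external store exposing the subscribe/getSnapshot pair
// that useSyncExternalStore consumes (illustrative sketch, no React).
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe function
    },
    getSnapshot() {
      return state; // must return an immutable snapshot, not a mutated object
    },
    setState(next) {
      state = next;
      listeners.forEach((listener) => listener()); // tell React to re-read
    },
  };
}

// In a component you would then write something like:
// const count = useSyncExternalStore(store.subscribe, store.getSnapshot);
const store = createStore({ count: 0 });
```

Because getSnapshot returns a fresh immutable value only when state actually changes, React can detect tearing between concurrent renders and re-read consistently.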
It ensures external stores play nicely with concurrent rendering without showing inconsistent UI states.\nuseInsertionEffect This specialized hook runs before any DOM mutations:\nuseInsertionEffect(() =\u0026gt; { // Ideal place to inject critical CSS const style = document.createElement(\u0026#39;style\u0026#39;); style.innerHTML = \u0026#39;.dynamic-class { color: red }\u0026#39;; document.head.appendChild(style); return () =\u0026gt; { // Clean up style.remove(); }; }); While most developers won\u0026rsquo;t use this directly, CSS-in-JS library authors are already leveraging it to improve performance and avoid layout thrashing.\nUpgrading Stories: What We\u0026rsquo;ve Learned Upgrading several production applications to React 18 has taught me some valuable lessons that might help others:\nAutomatic Batching Reveals Bugs React 18\u0026rsquo;s automatic batching is a tremendous performance win, but it can reveal sequencing bugs in your code. In one application, we discovered components that depended on state updates being processed immediately rather than batched. The fix was straightforward - using the new flushSync API in the few cases where we truly need immediate updates:\nimport { flushSync } from \u0026#39;react-dom\u0026#39;; function handleClick() { flushSync(() =\u0026gt; { setCounter(c =\u0026gt; c + 1); }); // DOM is updated here flushSync(() =\u0026gt; { setFlag(f =\u0026gt; !f); }); // DOM is updated again } Strict Mode Gets Stricter React 18\u0026rsquo;s Strict Mode now double-mounts components to help find effects without proper cleanup. 
This simulates what will happen with concurrent rendering, where components might mount and unmount multiple times before being visible.\nIn one dashboard, we discovered several data fetching effects without proper AbortController cleanup:\n// Before: Missing cleanup useEffect(() =\u0026gt; { fetch(\u0026#39;/api/data\u0026#39;).then(res =\u0026gt; res.json()).then(data =\u0026gt; setData(data)); }, []); // After: Proper cleanup for React 18 useEffect(() =\u0026gt; { const controller = new AbortController(); fetch(\u0026#39;/api/data\u0026#39;, { signal: controller.signal }) .then(res =\u0026gt; res.json()) .then(data =\u0026gt; setData(data)) .catch(err =\u0026gt; { if (err.name !== \u0026#39;AbortError\u0026#39;) { setError(err); } }); return () =\u0026gt; controller.abort(); }, []); Transition API for UX Improvements The useTransition hook and startTransition function have been game-changers for interactive experiences. For example, in a filtering interface with thousands of items, we previously used debouncing to avoid freezing the UI. With transitions, the code is cleaner and the UX is better:\nconst [isPending, startTransition] = useTransition(); const [filterText, setFilterText] = useState(\u0026#39;\u0026#39;); const [filteredItems, setFilteredItems] = useState(items); function handleFilterChange(e) { // This update is urgent: show what the user typed setFilterText(e.target.value); // This update can be deferred if the system is busy startTransition(() =\u0026gt; { setFilteredItems( items.filter(item =\u0026gt; item.name.includes(e.target.value) ) ); }); } return ( \u0026lt;\u0026gt; \u0026lt;input value={filterText} onChange={handleFilterChange} /\u0026gt; {isPending ?
\u0026lt;Spinner /\u0026gt; : null} \u0026lt;ItemList items={filteredItems} /\u0026gt; \u0026lt;/\u0026gt; ); The isPending state is particularly useful, as it lets us show a subtle loading indicator without replacing the existing content.\nPerformance in Numbers Benchmarking the aforementioned client\u0026rsquo;s e-commerce product catalog before and after upgrading to React 18 revealed some impressive improvements:\nTime to Interactive (TTI) improved by 32% thanks to streaming SSR Input responsiveness in search filters improved by 45% using transitions The most significant gain came from eliminating what we call \u0026ldquo;UI waterfalls\u0026rdquo; - where one loading state would finish only to trigger another loading state. With Suspense and concurrent rendering, we now show the right placeholder upfront and fill in content progressively.\nUpcoming Now that React 18 is in our production environments, what\u0026rsquo;s next? The React team has indicated that upcoming work will focus on:\nReact Server Components moving toward stable Further improvements to Suspense, especially for data fetching New compiler optimizations that take advantage of concurrent rendering For our projects, the immediate focus is on refactoring more parts of our applications to fully leverage transitions and suspense boundaries. The architecture patterns enabled by React 18 reward thoughtful UI decomposition, and we\u0026rsquo;re just beginning to explore their potential.\n","permalink":"/posts/2022-05-19-react-18-production//posts/2022-05-19-react-18-production/","summary":"\u003cp\u003eReact 18 launched as a stable release in March, bringing the long-anticipated concurrent rendering engine to production environments. After spending several weeks experiencing it across multiple projects, the performance improvements and developer experience enhancements are visible. 
What was theoretical in the alpha and beta releases has now proven its worth in real-world applications.\u003c/p\u003e\n\u003ch2 id=\"concurrent-rendering-from-theory-to-practice\"\u003eConcurrent Rendering: From Theory to Practice\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"../2021-08-14-react-18-alpha\"\u003eIn my previous coverage of React 18\u0026rsquo;s alpha release\u003c/a\u003e, I explored the shift from \u0026ldquo;Concurrent Mode\u0026rdquo; to \u0026ldquo;Concurrent React\u0026rdquo; and the theoretical underpinnings of what makes concurrency special. Now that we\u0026rsquo;re using it in real applications, let\u0026rsquo;s see how these concepts translate to practical benefits.\u003c/p\u003e","title":"React 18 in Production: Concurrent Rendering Delivers on its Promise"},{"content":"JavaScript has come a long way in terms of language features, but one area that has notoriously lagged behind is date and time handling. The ECMAScript Date object has been a source of confusion and frustration due to its inconsistent behaviors, mutability, tricky timezone handling, and limitations. This leaves many developers relying on third-party libraries to perform even fundamental date calculations.\nFortunately, we have champions who are well-versed in these challenges, leading to the Temporal API proposal. Having reached Stage 3 in the ECMAScript standardization process, Temporal offers a modern, robust, and ergonomic alternative for working with dates, times, time zones, and durations.\nWhat\u0026rsquo;s Wrong with Date? Before discussing what Temporal brings to the table, it’s important to understand where the existing Date object falls short:\n// Beware: month is zero-based!
const date = new Date(2022, 0, 15); // January 15, 2022 // Adding 2 days mutates the original object date.setDate(date.getDate() + 2); // Printing in different time zones affects only formatting, not the underlying datetime const options = { timeZone: \u0026#39;America/New_York\u0026#39; }; console.log(new Date().toLocaleString(\u0026#39;en-US\u0026#39;, options)); // The internal date/time remains in the local timezone or UTC Common pain points include: Zero-indexed months, a recurring source of off-by-one errors. Mutable date objects, which can lead to bugs as they’re modified in-place. Poor support for explicit time zone calculations. Inconsistent parsing behavior, especially with ISO strings. Ambiguities when dealing with daylight saving or leap seconds. Due to these issues, developers have been forced to adopt external libraries like Moment.js, date-fns, Day.js, or Luxon, increasing project complexity and bundle sizes.\nIntroducing the Temporal API The Temporal API addresses these frustrations with a comprehensive, immutable, and explicit suite of types and methods tailored for different date/time use cases.\nHere’s a basic example:\nimport { Temporal } from \u0026#39;@js-temporal/polyfill\u0026#39;; // Creating a calendar date (note: months are intuitive, starting at 1) const date = Temporal.PlainDate.from({ year: 2022, month: 1, day: 15 }); // Date math returns new instances, leaving the original unmodified const newDate = date.add({ days: 2 }); console.log(date.toString()); // 2022-01-15 console.log(newDate.toString()); // 2022-01-17 // Zoned current date/time retrieval const nowInNY = Temporal.Now.zonedDateTimeISO(\u0026#39;America/New_York\u0026#39;); const nowInTokyo = Temporal.Now.zonedDateTimeISO(\u0026#39;Asia/Tokyo\u0026#39;); Key Temporal types: PlainDate: Dates without time and without timezone info. PlainTime: Time-of-day components, no date. PlainDateTime: Date and time, without timezone. 
ZonedDateTime: Date/time with timezone, essential for accurate calendar events worldwide. PlainYearMonth: Year and month, useful for billing cycles. PlainMonthDay: Month and day (no year), great for birthdays. Instant: A specific point in UTC time. Duration: A length of time, without a start or end. This explicit modeling makes it easy to pick the right type based on your needs and avoids confusing conversions.\nPractical Examples Calculating Date Differences // With Date (imperative and error-prone) const start = new Date(2022, 0, 1); const end = new Date(2022, 0, 15); const diffDays = Math.floor((end - start) / (1000 * 60 * 60 * 24)); console.log(diffDays); // 14 // With Temporal (simpler and more expressive) const startDate = Temporal.PlainDate.from({ year: 2022, month: 1, day: 1 }); const endDate = Temporal.PlainDate.from({ year: 2022, month: 1, day: 15 }); const diff = startDate.until(endDate); console.log(diff.days); // 14 Time Zone Conversion // Create New York meeting time const meeting = Temporal.ZonedDateTime.from({ timeZone: \u0026#39;America/New_York\u0026#39;, year: 2022, month: 1, day: 20, hour: 15, minute: 30 }); // Convert to Tokyo’s time zone const meetingInTokyo = meeting.withTimeZone(\u0026#39;Asia/Tokyo\u0026#39;); console.log(meeting.hour); // 15 (3 PM NY time) console.log(meetingInTokyo.hour); // 5 (5 AM the next day in Tokyo) Date Arithmetic const today = Temporal.Now.plainDateISO(); const nextWeek = today.add({ days: 7 }); const lastMonth = today.subtract({ months: 1 }); // Getting a duration between two dates const interval = today.until(nextWeek); console.log(interval.days); // 7 Marching ahead JavaScript\u0026rsquo;s Temporal API proposal offers a long-needed, modern approach for handling date and time that addresses the shortcomings of the classic Date object. 
It stands to reduce reliance on third-party libraries, improve developer experience, and avoid many tough bugs related to time zones and date arithmetic.\nWhile it awaits full adoption into the language and browser implementations, developers can explore it today using the official polyfill to modernize their codebases and prepare for the future.\nI\u0026rsquo;m hopeful that browser vendors and JavaScript engine implementers will prioritize Temporal and add support in the near future. Unlike some spec features that have languished for years (looking at you, proper tail calls, still Safari-only years after shipping in ES2015), Temporal addresses such a fundamental developer need that it deserves rapid adoption.\nThe TC39 committee has done stellar work crafting a robust solution to date/time handling; now we need implementers to bring it to life in V8, SpiderMonkey, and JavaScriptCore. Until then, the polyfill provides an excellent bridge that lets us start enjoying these benefits immediately.\nIf you work with dates and times regularly in JavaScript, getting familiar with Temporal is a smart investment that will pay dividends when native implementations arrive.\n","permalink":"/posts/2022-02-26-better-date-handling-exploration-temporal//posts/2022-02-26-better-date-handling-exploration-temporal/","summary":"\u003cp\u003eJavaScript has come a long way in terms of language features, but one area that has notoriously lagged behind is date and time handling. The ECMAScript \u003ccode\u003eDate\u003c/code\u003e object has been a source of confusion and frustration due to its inconsistent behaviors, mutability, tricky timezone handling, and limitations.
This leaves many developers relying on third-party libraries to perform even fundamental date calculations.\u003c/p\u003e\n\u003cp\u003eFortunately, we have \u003ca href=\"https://github.com/tc39/proposal-temporal?tab=readme-ov-file#champions\"\u003echampions\u003c/a\u003e who are well-versed in these challenges, leading to the Temporal API proposal. Having reached Stage 3 in the ECMAScript standardization process, Temporal offers a modern, robust, and ergonomic alternative for working with dates, times, time zones, and durations.\u003c/p\u003e","title":"Towards Better Date Handling in JavaScript: An Exploration to the Temporal API Proposal"},{"content":"Last month, I was working on a client\u0026rsquo;s React application—a fairly typical dashboard for managing customer information in a marketing services product—when we discovered something alarming. Personal customer data was silently making its way into our Sentry error logs. This discovery kicked off an urgent investigation that revealed some non-obvious ways React applications can leak sensitive data to monitoring tools. I\u0026rsquo;m sharing this experience because it could happen to any team, yet it is serious enough to deserve attention.\nNote: The code snippets in this article are simplified for illustration. The actual production code contained complex business logic, and the identifiers that were logging PII were not immediately obvious in our codebase. The examples here have been distilled to clearly demonstrate the issue.\nThe Setup: Our Monitoring Infrastructure Like many modern web applications, this dashboard used React (v17) on the frontend with a Node.js backend.
We had integrated Sentry for error monitoring early in development—a standard practice to catch issues in production environments.\nOur initial Sentry configuration looked something like this:\nimport * as Sentry from \u0026#39;@sentry/react\u0026#39;; import { Integrations } from \u0026#39;@sentry/tracing\u0026#39;; Sentry.init({ dsn: \u0026#39;https://examplePublicKey@o0.ingest.sentry.io/0000001\u0026#39;, integrations: [new Integrations.BrowserTracing()], tracesSampleRate: 0.1, // We relied on Sentry\u0026#39;s default PII scrubbing // plus some basic customization beforeSend(event) { // Basic custom scrubbing for our specific app patterns if (event.request \u0026amp;\u0026amp; event.request.url) { event.request.url = event.request.url.replace(/\\/users\\/\\d+/, \u0026#39;/users/[REDACTED]\u0026#39;); } // ...other scrubbings return event; } }); We were relying heavily on Sentry\u0026rsquo;s built-in PII scrubbing capabilities, which by default can handle things like passwords, credit cards, and social security numbers. We had also added some custom URL pattern scrubbing specific to our application. According to Sentry\u0026rsquo;s documentation, this approach should have been sufficient for basic PII protection.\nThe Unsettling Discovery The issue came to light during a routine compliance review. The data protection officer was examining various systems for GDPR compliance when they asked us to show exactly what data was being sent to third parties.
While reviewing Sentry logs together, we spotted customer email addresses and partial account numbers appearing in several error reports.\nFurther investigation revealed personal data appearing in three distinct locations within our Sentry events:\nIn the React error boundary extra context data In the breadcrumbs automatically captured by Sentry In serialized Redux state snapshots added to error reports The Investigation: Tracking Down the Source The first step was to understand how this information was ending up in our error logs despite Sentry\u0026rsquo;s built-in protections. I started by examining recent errors in our dashboard that contained PII.\nOne pattern quickly emerged: most leaks were happening inside custom error boundaries we had implemented around critical components. These error boundaries were designed to gracefully handle failures but were inadvertently capturing and forwarding sensitive context data.\nHere\u0026rsquo;s what one of our error boundaries looked like:\nclass ProfileErrorBoundary extends React.Component { state = { hasError: false }; componentDidCatch(error, info) { this.setState({ hasError: true }); // Here\u0026#39;s where the problem was happening Sentry.withScope(scope =\u0026gt; { // We were adding the entire error context scope.setExtra(\u0026#34;componentStack\u0026#34;, info.componentStack); // This was the real issue - capturing state that contained PII scope.setExtra(\u0026#34;componentState\u0026#34;, this.state); // And sometimes even passing along props with sensitive data scope.setContext(\u0026#34;componentProps\u0026#34;, this.props); Sentry.captureException(error); }); } render() { if (this.state.hasError) { return \u0026lt;ErrorFallback /\u0026gt;; } return this.props.children; } } This error boundary was wrapping components that handled customer profile data, so this.props often contained full customer objects with email addresses, account details, and other sensitive information.\nThe issue became clearer: 
Sentry\u0026rsquo;s default scrubbing was looking for known patterns in standard event fields, but our custom error boundaries were adding PII under custom attributes that bypassed the automatic scrubbing.\nThe Root Cause: Multiple Vectors for Data Leakage After more digging, I found several interconnected issues:\n1. Custom Error Context in Error Boundaries Our error boundaries were capturing too much context, including props and state containing sensitive customer data. When errors occurred, this entire context was being sent to Sentry.\n2. Redux Middleware Capturing State We were using a custom Redux middleware that would capture state slices for debugging:\nconst sentryReduxMiddleware = store =\u0026gt; next =\u0026gt; action =\u0026gt; { try { return next(action); } catch (err) { // Capturing the entire profile slice of state - bad idea! const currentState = store.getState(); Sentry.captureException(err, { extra: { action: action, state: currentState.userProfile // Contains PII! } }); throw err; } }; 3. 
Detailed Console Logging Captured as Breadcrumbs Throughout our application, developers had added detailed logging that sometimes included PII:\nconsole.log(\u0026#34;Loaded profile for user:\u0026#34;, user.email, user.accountDetails); Sentry\u0026rsquo;s automatic breadcrumb collection was picking up these console logs, complete with the sensitive data we were logging.\nWhy Sentry\u0026rsquo;s Built-in Scrubbing Wasn\u0026rsquo;t Catching This Sentry does provide built-in data scrubbing capabilities as documented in their guides, but there are limitations:\nDefault scrubbing primarily targets common patterns like credit card numbers and social security numbers It focuses on standard fields like cookies, headers, and query strings Custom context data added via setExtra and setContext often bypasses default scrubbing Breadcrumbs with console logs aren\u0026rsquo;t deeply scrubbed by default The server-side Advanced Data Scrubbing feature requires explicit configuration with custom rules Our issues fell through these gaps - we were adding PII in custom contexts, breadcrumbs, and extra fields that weren\u0026rsquo;t covered by the default scrubbing patterns.\nThe Solution: Multi-layered PII Protection Fixing this issue required multiple approaches:\n1. Revise Error Boundaries to Limit Context We updated our error boundaries to avoid capturing sensitive props and state:\ncomponentDidCatch(error, info) { this.setState({ hasError: true }); Sentry.withScope(scope =\u0026gt; { // Only include the component stack, not state or props scope.setExtra(\u0026#34;componentStack\u0026#34;, info.componentStack); // If we absolutely need some context, explicitly scrub it first if (this.props.userProfile) { const safeProfile = { hasData: !!this.props.userProfile, // Only include non-PII fields we need for debugging sections: Object.keys(this.props.userProfile) }; scope.setContext(\u0026#34;profile\u0026#34;, safeProfile); } Sentry.captureException(error); }); } 2. 
Configure Sentry\u0026rsquo;s Advanced Data Scrubbing We implemented more thorough server-side scrubbing rules in Sentry\u0026rsquo;s dashboard:\nWe added custom rules to target our specific patterns, including email addresses in custom contexts and account numbers in error messages.\n3. Fix Redux Middleware We rewrote our Redux error reporting middleware to be PII-aware:\nconst sentryReduxMiddleware = store =\u0026gt; next =\u0026gt; action =\u0026gt; { try { return next(action); } catch (err) { // Only capture the action type and safe metadata const sanitizedAction = { type: action.type, hasPayload: !!action.payload }; // Avoid capturing sensitive state slices entirely Sentry.captureException(err, { extra: { action: sanitizedAction, // Only include safe state information stateKeys: Object.keys(store.getState()) } }); throw err; } }; 4. Improve Logging Practices We implemented a custom logger that automatically redacts sensitive information:\nconst safeLog = (message, ...args) =\u0026gt; { const safeArgs = args.map(arg =\u0026gt; { if (typeof arg === \u0026#39;object\u0026#39; \u0026amp;\u0026amp; arg !== null) { // Create shallow copy to avoid mutating the original const safeCopy = {...arg}; // Redact known PII fields [\u0026#39;email\u0026#39;, \u0026#39;password\u0026#39;, \u0026#39;ssn\u0026#39;, \u0026#39;accountNumber\u0026#39;].forEach(field =\u0026gt; { if (field in safeCopy) safeCopy[field] = \u0026#39;[REDACTED]\u0026#39;; }); return safeCopy; } return arg; }); console.log(message, ...safeArgs); }; // Usage safeLog(\u0026#34;Loaded profile for user:\u0026#34;, user); // Will redact sensitive fields Lessons Learned: Privacy by Design This experience taught me several valuable lessons about handling PII in frontend applications:\nDon\u0026rsquo;t trust default scrubbing alone. While Sentry offers good baseline protection, custom application needs require custom scrubbing rules.\nBe particularly careful with custom contexts. 
Error reporting middleware, error boundaries, and custom logging are common paths for PII leakage.\nAudit your breadcrumbs. Console logs captured as breadcrumbs are easy to overlook but can contain a wealth of sensitive information.\nTest your error monitoring in production-like conditions. Only by simulating real errors with real (or realistic) data could we have caught this issue earlier.\nUse server-side scrubbing as your last line of defense. Client-side prevention is good, but server-side rules ensure that even if PII slips through, it won\u0026rsquo;t be stored.\nMoving Forward As applications become more complex and privacy regulations more stringent, we need to approach error monitoring with the same care we apply to other aspects of security. The tradeoff between detailed error information and privacy protection isn\u0026rsquo;t always easy to navigate, but awareness of these potential issues is the first step.\nFor the application, the team has now implemented a periodic \u0026ldquo;monitoring audit\u0026rdquo; where we intentionally trigger various error conditions and examine what data is captured by our monitoring tools. This has already helped us catch several other minor issues before they became problems.\nThe key takeaway? Even with a robust tool like Sentry that offers built-in PII protection, the complexity of modern React applications creates numerous opportunities for sensitive data to leak through. A defense-in-depth approach is essential, with protection at both the client and server levels.\n","permalink":"/posts/2021-11-05-react-sentry-pii//posts/2021-11-05-react-sentry-pii/","summary":"\u003cp\u003eLast month, I was working on a client\u0026rsquo;s React application—a fairly typical dashboard for managing customer information in a marketing services product—when we discovered something alarming. Personal customer data was silently making its way into our Sentry error logs.
This discovery kicked off an urgent investigation that revealed some non-obvious ways React applications can leak sensitive data to monitoring tools. I\u0026rsquo;m sharing this experience because it could happen to any team, yet it is serious enough to deserve attention.\u003c/p\u003e","title":"Observability Hygiene: When React Components Accidentally Expose PII to Sentry"},{"content":"The Status Quo and Its Discontents Before diving into these new tools, let\u0026rsquo;s consider where we are. For years, Webpack has dominated the bundling landscape, with tools like Babel handling transpilation. These tools, written in JavaScript, have served us well, but as projects grow larger and demands for features increase, their performance limitations have become increasingly apparent.\nA typical modern JavaScript project involves multiple processing steps, each adding to build time. First comes transpiling ES6+ syntax for browser compatibility, followed by converting TypeScript or Flow to JavaScript. Then there\u0026rsquo;s the transformation of JSX into React function calls, bundling of hundreds or thousands of modules, minifying the resulting code, and finally generating source maps. For large projects, this full process can take minutes—a real drag on development velocity.\nThe Compiled Contenders The key insight behind both esbuild and SWC is that the bottleneck isn\u0026rsquo;t just the algorithms—it\u0026rsquo;s the implementation language. By rewriting these tools in compiled, high-performance languages like Go and Rust, they achieve dramatic speed improvements.\nesbuild: The Go-powered Bundler esbuild, created by Evan Wallace (co-founder of Figma), is written in Go and positions itself as a JavaScript bundler and minifier. Its headline feature is raw speed: it claims to be 10-100x faster than competing tools.\nHow fast is it in practice?
On my moderately sized React application, a full production build that took 45 seconds with Webpack completes in just under 3 seconds with esbuild. That\u0026rsquo;s not a typo—it\u0026rsquo;s more than an order of magnitude faster.\nThis performance doesn\u0026rsquo;t come from magic but from several thoughtful design decisions. Written in Go, esbuild leverages the language\u0026rsquo;s strong concurrency support through a parallelized architecture that puts multiple CPU cores to work simultaneously. It carefully avoids unnecessary file system operations and includes highly optimized parsing and code generation routines, all contributing to its impressive speed.\nFeature-wise, esbuild supports what you\u0026rsquo;d expect from a modern bundler: ES6 and CommonJS modules, tree shaking to eliminate dead code, JSX transformation, TypeScript compilation (with some limitations), and source map generation. However, it does have limitations—it doesn\u0026rsquo;t support certain Babel plugins or custom transformations, and its TypeScript support doesn\u0026rsquo;t include type checking.\nSWC: Rust Comes to JavaScript Transpilation SWC (Speedy Web Compiler) takes a different approach. Created by DongYoon Kang, SWC focuses primarily on being a transpiler—a direct alternative to Babel—though it also includes bundling capabilities. Written in Rust, it boasts similar performance claims to esbuild, advertising itself as 20x faster than Babel.\nSWC\u0026rsquo;s feature set targets compatibility with existing workflows. It handles ES6+ to ES5 transformation, TypeScript and Flow transpilation, JSX conversion, and minification. It also provides a custom plugin system, though the API is still evolving. What makes SWC particularly interesting is its compatibility goals. 
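That compatibility extends to configuration: SWC reads a .swcrc file covering the same concerns a Babel config does. Below is a minimal sketch for a TypeScript plus JSX project; the jsc.parser and jsc.target fields follow SWC's documented configuration schema, but the specific values are illustrative:

```json
{
  "jsc": {
    "parser": {
      "syntax": "typescript",
      "tsx": true
    },
    "target": "es5"
  }
}
```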
Where esbuild sometimes prioritizes performance over feature parity, SWC aims to be a drop-in replacement for Babel while maintaining its speed advantage.\nFramework Integrations Vite and esbuild Vite, a build tool created by Evan You (creator of Vue.js), leverages esbuild for its dev server. Vite 2, released Feb this year, embraces the idea of using native ES modules in the browser during development and only bundling for production.\nThe developer experience is transformative. Changes appear in the browser almost instantly, with no noticeable bundling delay. This is the kind of workflow improvement that can meaningfully change how developers interact with their code.\nNext.js and Experiments with SWC The Next.js team has been moving to SWC integration to replace Babel in their build pipeline. While not fully rolled out yet, the experiments show promising results, with build times reduced by 2x in some cases.\nThis isn\u0026rsquo;t just about raw speed—it\u0026rsquo;s about enabling workflows that weren\u0026rsquo;t previously practical. Imagine Next.js builds completing in seconds rather than minutes, allowing for more frequent deployments and tighter feedback loops.\nThe Migration Challenges First is the plugin ecosystem issue. Both Webpack and Babel have rich plugin ecosystems built over years. If your workflow depends on specific plugins, you may find that neither esbuild nor SWC supports them yet. A senior developer at a fintech company recently told me: \u0026ldquo;We were excited about the speed gains, but we had to keep parts of our Babel pipeline for our custom transformations. We\u0026rsquo;re gradually moving those over as SWC\u0026rsquo;s plugin system matures.\u0026rdquo;\nConfiguration complexity presents another challenge. esbuild\u0026rsquo;s configuration is simpler than Webpack\u0026rsquo;s, which can be both a blessing and a curse. 
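To illustrate just how small that configuration surface is, an entire production build can be described by a handful of options. Here is a sketch (the file paths are invented; the option names are esbuild's documented build options):

```javascript
// The whole build "config" for a typical app: one options object.
// Paths are illustrative; entryPoints, bundle, minify, sourcemap, and
// outfile are esbuild's documented build options.
const buildOptions = {
  entryPoints: ['src/app.jsx'],
  bundle: true,      // follow imports and produce a single output file
  minify: true,      // production-ready output
  sourcemap: true,   // emit a .map file alongside the bundle
  outfile: 'dist/app.js',
};

// With the esbuild package installed, running the build is one call:
//   require('esbuild').build(buildOptions).catch(() => process.exit(1));
```

Compare that with a typical multi-loader Webpack configuration: most of what Webpack needs plugins for is built in here.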
If you\u0026rsquo;ve invested heavily in a complex Webpack configuration that handles various edge cases, migrating might require rethinking your build architecture.\nTypeScript integration remains partial with these tools. Both handle TypeScript transpilation, but neither performs type checking. You\u0026rsquo;ll still need to run the TypeScript compiler separately for that, typically as part of your CI/CD pipeline.\nFinally, there\u0026rsquo;s the question of production readiness. While many are adopting these tools for development environments, some teams remain cautious about production deployments. The tools are still evolving, and edge cases continue to be discovered and addressed.\nPerformance Implications for Large Projects The impact of these tools is most profound on larger projects. I spoke with a team lead at an e-commerce platform who switched their development environment to use Vite with esbuild:\n\u0026ldquo;Our app has over 500 components and thousands of modules(!). Webpack hot reloading was taking 8-10 seconds, which doesn\u0026rsquo;t sound like much until you\u0026rsquo;re doing it 100 times a day. With Vite and esbuild, changes reflect almost instantly. Our developers say it\u0026rsquo;s like having a weight lifted off their workflow.\u0026rdquo;\nThe benefits extend beyond the development feedback loop. CI/CD pipelines that previously took 15-20 minutes to build and test can now complete in a fraction of that time, allowing for more frequent deployments and faster bug fixes.\nLooking Forward: The Compiled Future What we\u0026rsquo;re witnessing is likely just the beginning of a broader trend. The success of esbuild and SWC demonstrates that there\u0026rsquo;s significant performance potential in reimplementing JavaScript tooling in compiled languages.\nWe\u0026rsquo;re already seeing new projects emerge in this space. Deno, a secure JavaScript/TypeScript runtime built in Rust, takes a similar approach to performance optimization.
Rome, an ambitious JavaScript toolchain also written in Rust, is in early development but promises to unify many development tools into a cohesive, high-performance package.\nThe JavaScript ecosystem has always been characterized by rapid evolution, and this shift toward compiled tooling represents an important inflection point. Just as we saw improvements when moving from task runners like Grunt to more sophisticated bundlers like Webpack, this new generation of tools promises to eliminate many of the performance pain points that have become accepted as the cost of modern JavaScript development.\n","permalink":"/posts/2021-09-29-esbuild-swc-rise-build-tools//posts/2021-09-29-esbuild-swc-rise-build-tools/","summary":"\u003ch2 id=\"the-status-quo-and-its-discontents\"\u003eThe Status Quo and Its Discontents\u003c/h2\u003e\n\u003cp\u003eBefore diving into these new tools, let\u0026rsquo;s consider where we are. For years, Webpack has dominated the bundling landscape, with tools like Babel handling transpilation. These tools, written in JavaScript, have served us well, but as projects grow larger and demands for features increase, their performance limitations have become increasingly apparent.\u003c/p\u003e\n\u003cp\u003eA typical modern JavaScript project involves multiple processing steps, each adding to build time. First comes transpiling ES6+ syntax for browser compatibility, followed by converting TypeScript or Flow to JavaScript. Then there\u0026rsquo;s the transformation of JSX into React function calls, bundling of hundreds or thousands of modules, minifying the resulting code, and finally generating source maps.
For large projects, this full process can take minutes—a real drag on development velocity.\u003c/p\u003e","title":"Go and Rust bring unprecedented speed to JavaScript bundling and transpilation"},{"content":"React 18 is on the horizon, and it\u0026rsquo;s bringing some of the most significant changes to the library since hooks were introduced. As the alpha version has hit GitHub and the React team has shared their plans, there\u0026rsquo;s a lot to unpack and prepare for. Having spent the last few weeks digging through discussions and experimenting with the alpha, I\u0026rsquo;m excited to share what these changes mean for our codebases.\nUnderstanding Concurrent React: It\u0026rsquo;s Not a Mode Anymore One of the first things to understand about React 18 is an important shift in terminology. \u0026ldquo;Concurrent Mode,\u0026rdquo; which many of us have been hearing about for years, is being retired. Instead, we\u0026rsquo;re getting \u0026ldquo;Concurrent React\u0026rdquo; - and the distinction matters.\nConcurrency is no longer something you opt into at the root of your application. Instead, it\u0026rsquo;s a behind-the-scenes mechanism that powers new features you can gradually adopt. This is fantastic news for incremental adoption, as it means we won\u0026rsquo;t need to rewrite our apps to take advantage of these improvements.\nWhat Makes Concurrent Rendering Special? So what exactly is concurrent rendering? In essence, it\u0026rsquo;s React\u0026rsquo;s ability to prepare multiple versions of your UI at the same time. The current React rendering model is synchronous and blocking - once it starts rendering, it doesn\u0026rsquo;t stop until it\u0026rsquo;s done, which can cause performance issues with complex UI updates.\nWith concurrent rendering, React can start rendering, pause, and continue later. It can even abandon a render altogether if a more urgent update comes in. 
This interruptible rendering gives React the ability to interleave multiple tasks, enabling it to:\nKeep your app responsive during intensive rendering tasks Show content as soon as it\u0026rsquo;s ready instead of waiting for everything Avoid showing loading states that flash for only a brief moment This capability underpins all the exciting features coming in React 18 and represents a fundamental improvement to React\u0026rsquo;s rendering approach.\nThe New Features Powered by Concurrency Automatic Batching Currently, React batches state updates inside event handlers but not in promises, timeouts, or native event handlers. In React 18, batching becomes more consistent:\n// React 17: these setState calls trigger two separate renders fetch(\u0026#39;/api/data\u0026#39;).then(() =\u0026gt; { setCount(c =\u0026gt; c + 1); setFlag(f =\u0026gt; !f); }); // React 18: these will be automatically batched into one render fetch(\u0026#39;/api/data\u0026#39;).then(() =\u0026gt; { setCount(c =\u0026gt; c + 1); setFlag(f =\u0026gt; !f); }); This improvement alone could offer noticeable performance benefits for many applications with no code changes required.\nThe Transition API Perhaps the most immediately useful concurrent feature is the new startTransition API. It allows you to mark UI updates as \u0026ldquo;transitions,\u0026rdquo; which React treats as lower priority.\nConsider a search input that filters a large list. 
With traditional rendering, typing quickly would cause the UI to freeze as React struggles to update the filtered results for each keystroke:\n// Without transitions, this can block the UI function handleChange(e) { setSearchQuery(e.target.value); setFilteredResults(filterByQuery(e.target.value)); } With transitions, we can prioritize the input\u0026rsquo;s responsiveness:\n// With transitions, input remains responsive function handleChange(e) { // Urgent: Show what the user typed setSearchQuery(e.target.value); // Mark filtering as a transition (lower priority) startTransition(() =\u0026gt; { setFilteredResults(filterByQuery(e.target.value)); }); } For my clients with data-heavy dashboards, this is going to be a game-changer. The ability to keep inputs responsive while handling expensive updates in the background solves a pain point we\u0026rsquo;ve been working around with debouncing and other techniques for years.\nSuspense: The Evolution from Legacy to React 18 Suspense has been around since React 16.6, but it\u0026rsquo;s undergoing significant improvements in React 18. To understand what\u0026rsquo;s changing, let\u0026rsquo;s look at how Suspense currently works versus its new implementation.\nLegacy Suspense (React 16/17) In the current version of React, Suspense was primarily designed for code-splitting with React.lazy(). When used for other purposes like data fetching, it had some quirky behavior:\nWhen a component inside a Suspense boundary suspended, React would put the DOM in an inconsistent state temporarily. React would hide this inconsistency with display: none and show the fallback. Most importantly, sibling components continued rendering even when a component suspended. 
This approach was described by the React team as \u0026ldquo;a bit weird, but a good compromise for introducing Suspense in a backwards compatible way.\u0026rdquo;\nAdditionally, in React 17, a Suspense boundary with no fallback would be skipped entirely, which could lead to confusing debugging scenarios.\n// Legacy Suspense behavior function ProfilePage() { return ( \u0026lt;div\u0026gt; \u0026lt;Suspense fallback={\u0026lt;Spinner /\u0026gt;}\u0026gt; \u0026lt;ProfileHeader /\u0026gt; {/* If this suspends */} \u0026lt;/Suspense\u0026gt; \u0026lt;Suspense fallback={\u0026lt;Spinner /\u0026gt;}\u0026gt; \u0026lt;ProfileTimeline /\u0026gt; {/* This still renders! */} \u0026lt;/Suspense\u0026gt; \u0026lt;/div\u0026gt; ); } Concurrent Suspense (React 18) With React 18, Suspense becomes a first-class part of the rendering model:\nWhen a component suspends, React now interrupts siblings and prevents them from committing until data is ready. The mental model is similar to how try/catch works in JavaScript - the closest Suspense boundary above \u0026ldquo;catches\u0026rdquo; the suspending component, no matter how many components are in between. Suspense boundaries will be respected even if the fallback is null or undefined, making behavior more predictable. // React 18 Suspense behavior function ProfilePage() { return ( \u0026lt;div\u0026gt; \u0026lt;Suspense fallback={\u0026lt;PageGlimmer /\u0026gt;}\u0026gt; \u0026lt;ProfileHeader /\u0026gt; {/* If this suspends */} \u0026lt;Suspense fallback={\u0026lt;LeftColumnGlimmer /\u0026gt;}\u0026gt; \u0026lt;Comments /\u0026gt; {/* These components don\u0026#39;t render until ProfileHeader is ready */} \u0026lt;Photos /\u0026gt; \u0026lt;/Suspense\u0026gt; \u0026lt;/Suspense\u0026gt; \u0026lt;/div\u0026gt; ); } In this example, if ProfileHeader suspends, the entire page will be replaced with PageGlimmer. However, if either Comments or Photos suspends, both will be replaced together by LeftColumnGlimmer. 
This lets you design Suspense boundaries that match your visual UI layout.\nThe new model is more intuitive and consistent with how developers expect components to behave. It also enables more sophisticated UIs that can reveal content progressively as it becomes available, rather than waiting for everything or showing too many spinners.\nThis revamped Suspense system works hand-in-hand with concurrent rendering to deliver a more responsive user experience, especially when dealing with data fetching and dynamic content.\nPreparing Your Codebase for React 18 While React 18 is still in alpha, there are steps we can take to prepare our applications:\n1. Identify Blocking Renders Start identifying parts of your application where rendering blocks the main thread. User interactions that trigger heavy updates are prime candidates for transitions once React 18 is available. For now, consider using techniques like debouncing and virtualizing long lists.\n2. Clean Up Effects React 18\u0026rsquo;s concurrency model means components might mount and unmount multiple times without being visible to users (as React tries different UI states). Review your useEffect cleanup functions to ensure they properly cancel subscriptions, abort fetches, and clean up resources.\n// Make sure your effects have proper cleanup useEffect(() =\u0026gt; { const controller = new AbortController(); fetch(\u0026#39;/api/data\u0026#39;, { signal: controller.signal }) .then(/* ... */); return () =\u0026gt; { controller.abort(); // This is crucial in React 18 }; }, []); 3. Fix Strict Mode Warnings If you haven\u0026rsquo;t already, enable React\u0026rsquo;s Strict Mode. React 18 will build on Strict Mode to help identify components that aren\u0026rsquo;t prepared for concurrent rendering. Fix any existing warnings now to make the eventual migration smoother.\n4. Follow the Working Group The React 18 Working Group is hosting discussions on GitHub, and they\u0026rsquo;re publicly available to read. 
This is a goldmine of information about upcoming changes and best practices. Members of the working group are actively sharing feedback, asking questions, and contributing ideas that are shaping the final release. Following these discussions gives you early insight into how the APIs are evolving.\n5. Consider Server Components While not directly part of React 18, the React team is also working on Server Components, which will complement the concurrent rendering model. They allow components to run on the server without JavaScript overhead on the client. Start thinking about which parts of your app could benefit from this pattern.\nWhy I\u0026rsquo;m Excited After experimenting with the alpha and examining real-world use cases, I\u0026rsquo;m convinced this is a step in the right direction.\nOne of my projects is a complex dashboard with multiple data visualizations that update based on filters. Currently, when users change filters, there\u0026rsquo;s a noticeable lag as all the visualizations update simultaneously. With transitions, we\u0026rsquo;ll be able to keep the UI responsive while updating these visualizations with a much smoother experience.\nThe React team is also clearly prioritizing incremental adoption. Unlike the jump to Hooks, which required significant mental model shifts, concurrent features can be adopted gradually, with each step providing tangible benefits.\nLooking Ahead React 18 represents a significant evolution, not just in performance but in how we think about rendering UI. The concurrent features provide new tools for creating responsive interfaces without sacrificing the declarative programming model we love about React.\nWhile we don\u0026rsquo;t have a specific release date yet, the alpha is available for experimentation, and the APIs are stabilizing. For those of us building complex applications, it\u0026rsquo;s worth starting to prepare now, both mentally and in our codebases.\nWant to learn more about React 18? 
Check out the official announcement post and join the discussions on GitHub to stay updated on the latest developments.\n","permalink":"/posts/2021-08-14-react-18-alpha//posts/2021-08-14-react-18-alpha/","summary":"\u003cp\u003eReact 18 is on the horizon, and it\u0026rsquo;s bringing some of the most significant changes to the library since hooks were introduced. As the alpha version has hit GitHub and the React team has shared their plans, there\u0026rsquo;s a lot to unpack and prepare for. Having spent the last few weeks digging through discussions and experimenting with the alpha, I\u0026rsquo;m excited to share what these changes mean for our codebases.\u003c/p\u003e","title":"Gearing Up for React 18: Concurrent React"},{"content":"As web development evolves, many developers find themselves wanting to migrate their Create React App (CRA) projects to Next.js to take advantage of features like server-side rendering, static site generation, and improved routing. Today, I\u0026rsquo;m breaking down an exciting new experimental codemod tool that has recently landed in the @next/codemod package designed to automate this migration process.\nExperimental CRA to Next.js Migration Tool This transformation has recently been added to the official @next/codemod package as an experimental feature, making it easier than ever to migrate existing CRA applications to Next.js. Being experimental, it may continue to evolve as the Next.js team gathers feedback from early adopters.\nWhat is a Codemod? Before diving in, let\u0026rsquo;s clarify what a codemod is: it\u0026rsquo;s an automated tool that helps transform source code from one format to another. 
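As a toy illustration of the idea (my own sketch, far simpler than the real transformer), a codemod can be as small as a pure function from one representation of your project to another. Here, that means rewriting CRA npm scripts to their Next.js equivalents, using the two mappings this codemod actually performs:

```javascript
// Toy "codemod": rewrite CRA npm scripts to their Next.js equivalents.
// For illustration only -- the real @next/codemod transformer rewrites
// whole source files via jscodeshift ASTs, not just package.json.
function transformScripts(scripts) {
  const replacements = {
    'react-scripts start': 'next dev',
    'react-scripts build': 'next build',
  };
  const transformed = {};
  for (const [name, command] of Object.entries(scripts)) {
    // Swap known CRA commands; leave anything unrecognized untouched.
    transformed[name] = replacements[command] || command;
  }
  return transformed;
}
```

Running it over a typical CRA `"scripts"` block turns `start` into `next dev` and `build` into `next build` while leaving custom scripts alone, which is the essence of what the real tool does for this step.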
In this case, the newly introduced experimental codemod helps transform a CRA project into a Next.js project by automatically updating file structures, configurations, and code patterns.\nCodemod Overview This newly added experimental codemod is a sophisticated transformer designed to handle the complex task of migrating a CRA project to Next.js. At its core, it:\nAnalyzes your CRA project structure Transforms key files to be Next.js compatible Creates necessary Next.js configuration files Updates package dependencies Sets up the pages directory with proper components Let\u0026rsquo;s explore how this experimental feature works in detail.\nProject Analysis and Validation When you run the experimental codemod, it first validates your project directory and determines if it\u0026rsquo;s a proper CRA project by looking for react-scripts in your dependencies. It also detects if you\u0026rsquo;re using TypeScript by checking for a tsconfig.json file and .ts/.tsx files in your project.\nthis.isCra = hasDep(\u0026#39;react-scripts\u0026#39;) this.isVite = !this.isCra \u0026amp;\u0026amp; hasDep(\u0026#39;vite\u0026#39;) if (!this.isCra \u0026amp;\u0026amp; !this.isVite) { fatalMessage(`Error: react-scripts was not detected, is this a CRA project?`) } this.shouldUseTypeScript = fs.existsSync(path.join(this.appDir, \u0026#39;tsconfig.json\u0026#39;)) || globby.sync(\u0026#39;src/**/*.{ts,tsx}\u0026#39;, { cwd: path.join(this.appDir, \u0026#39;src\u0026#39;), }).length \u0026gt; 0 Transforming the Entry Point One of the most critical transformations in this experimental codemod is converting your CRA entry point (src/index.js or src/main.js) into a React component that can be rendered within Next.js. 
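Conceptually, that rewrite replaces the top-level ReactDOM.render(...) call with an exported component. Below is a greatly simplified, string-level sketch of that shape; the real codemod operates on the AST via jscodeshift, and the exported name here is invented for the example:

```javascript
// Greatly simplified sketch of the entry-point rewrite: replace a
// ReactDOM.render(<App />, ...) call with an exported component.
// The real codemod manipulates the AST with jscodeshift; this regex
// version only illustrates the shape of the transformation, and the
// export name "EntryWrapper" is made up for illustration.
function transformEntryPoint(source) {
  const call = source.match(/ReactDOM\.render\(\s*<(\w+)\s*\/>/);
  if (!call) return null; // e.g. nested renders: needs manual work
  const component = call[1];
  return source.replace(
    /ReactDOM\.render\([\s\S]*\)\s*;?/,
    `export default function EntryWrapper(props) {\n` +
      `  return <${component} {...props} />;\n}`
  );
}
```

Returning `null` when no simple render call is found mirrors the codemod's behavior of bailing out (with an error message) on entry points it cannot safely transform.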
The codemod uses a custom transformation to convert ReactDOM.render calls into a proper component:\nconst indexTransformRes = await runJscodeshift( indexTransformPath, { ...this.jscodeShiftFlags, silent: true, verbose: 0 }, [path.join(this.appDir, \u0026#39;src\u0026#39;, this.indexPage)] ) It also performs safety checks to ensure your app doesn\u0026rsquo;t have multiple render roots or nested renders, which would need manual intervention.\nHandling Global CSS Next.js handles CSS differently than CRA. The experimental codemod identifies global CSS imports in your application and moves them to the appropriate location:\nconst globalCssRes = await runJscodeshift( globalCssTransformPath, { ...this.jscodeShiftFlags }, [this.appDir] ) These CSS imports will be added to the _app.js file, which is the entry point for Next.js applications.\nCreating the Next.js Pages Structure Next.js uses a file-system based router with a /pages directory. The experimental codemod creates this directory and populates it with essential files:\n_app.js: The application wrapper that includes global styles and layout elements _document.js: Customizes the HTML document structure [[...slug]].js: A catch-all route that imports your transformed CRA application The codemod extracts elements from your public/index.html file and distributes them to the appropriate Next.js files:\n// For _app.js ${ titleTag || metaViewport ? `return ( \u0026lt;\u0026gt; \u0026lt;Head\u0026gt; ${titleTag ? `\u0026lt;title\u0026gt;...\u0026lt;/title\u0026gt;` : \u0026#39;\u0026#39;} ${metaViewport ? `\u0026lt;meta... 
/\u0026gt;` : \u0026#39;\u0026#39;} \u0026lt;/Head\u0026gt; \u0026lt;Component {...pageProps} /\u0026gt; \u0026lt;/\u0026gt; )` : \u0026#39;return \u0026lt;Component {...pageProps} /\u0026gt;\u0026#39; } Package.json Updates The experimental codemod updates your package.json to replace CRA dependencies with Next.js dependencies:\n// Removes react-scripts const packagesToRemove = { [packageName]: undefined, } // Adds next const newDependencies = [ { name: \u0026#39;next\u0026#39;, version: \u0026#39;latest\u0026#39;, }, ] It also transforms your npm scripts, converting react-scripts start to next dev, react-scripts build to next build, etc.\nNext.js Configuration The experimental codemod creates a next.config.js file with settings to ensure compatibility with your CRA project:\nmodule.exports = { // Preserves proxy settings from CRA async rewrites() { ... }, // Preserves PUBLIC_URL environment variable env: { PUBLIC_URL: \u0026#39;${homepagePath === \u0026#39;/\u0026#39; ? \u0026#39;\u0026#39; : homepagePath || \u0026#39;\u0026#39;}\u0026#39; }, // Enables CRA compatibility mode experimental: { craCompat: true, }, // Preserves CRA\u0026#39;s image handling images: { disableStaticImages: true } } Client-Side Rendering Fallback To handle potential issues with server-side rendering (SSR), the experimental codemod creates a dynamic import for your application with SSR disabled by default:\nimport dynamic from \u0026#39;next/dynamic\u0026#39; const NextIndexWrapper = dynamic(() =\u0026gt; import(\u0026#39;${relativeIndexPath}\u0026#39;), { ssr: false }) export default function Page(props) { return \u0026lt;NextIndexWrapper {...props} /\u0026gt; } This prevents common errors when migrating from CRA, which assumes client-side rendering only, to Next.js which supports SSR.\nUsing the Experimental Codemod To use this newly added experimental codemod, you can install @next/codemod and run the transformation:\nnpx @next/codemod cra-to-next ./your-cra-app The codemod will 
analyze your project and perform the transformation steps outlined above.\nLimitations and Experimental Status As this is a recently added experimental feature in @next/codemod, there are some limitations to be aware of:\nIt doesn\u0026rsquo;t support projects with multiple ReactDOM.render calls SVG imports using the {ReactComponent} syntax aren\u0026rsquo;t supported Some browser-only code might need manual adjustments Complex webpack configurations may require additional tweaking Being an experimental feature, you might encounter edge cases that haven\u0026rsquo;t been fully addressed yet. The Next.js team is actively collecting feedback to improve this codemod.\nConclusion This recently added experimental codemod in the @next/codemod package significantly reduces the manual work required to migrate from CRA to Next.js. It handles the structural changes, configuration updates, and common code transformations automatically while providing helpful error messages when it encounters issues it can\u0026rsquo;t resolve.\nIf you\u0026rsquo;re considering migrating your CRA project to Next.js, this experimental codemod is an excellent starting point that will save you hours of manual migration work. After running it, you\u0026rsquo;ll have a functioning Next.js application that preserves your original CRA functionality while enabling you to gradually adopt Next.js features. Being an experimental feature, you should thoroughly test your application after migration and may need to make manual adjustments for more complex use cases.\n","permalink":"/posts/2021-07-26-experimental-cra-to-next-codemod//posts/2021-07-26-experimental-cra-to-next-codemod/","summary":"\u003cp\u003eAs web development evolves, many developers find themselves wanting to migrate their Create React App (CRA) projects to Next.js to take advantage of features like server-side rendering, static site generation, and improved routing. 
Today, I\u0026rsquo;m breaking down an exciting new experimental codemod tool that has \u003ca href=\"https://github.com/vercel/next.js/pull/24969\"\u003erecently landed in the \u003ccode\u003e@next/codemod\u003c/code\u003e package\u003c/a\u003e designed to automate this migration process.\u003c/p\u003e\n\u003ch2 id=\"experimental-cra-to-nextjs-migration-tool\"\u003eExperimental CRA to Next.js Migration Tool\u003c/h2\u003e\n\u003cp\u003eThis transformation has recently been added to the official \u003ccode\u003e@next/codemod\u003c/code\u003e package as an experimental feature, making it easier than ever to migrate existing CRA applications to Next.js. Being experimental, it may continue to evolve as the Next.js team gathers feedback from early adopters.\u003c/p\u003e","title":"Learning from the New Experimental CRA to Next.js Migration Codemod"},{"content":"The State of E2E Testing E2E testing has traditionally been a headache. Selenium tests are notoriously brittle, and while unit or integration tests catch a lot, they often miss real-world user flows. Cypress helped change that narrative. With its clean API and time-travel debugging, it quickly became the go-to for many developers.\nThat said, Playwright has been picking up steam—especially with teams that need serious cross-browser support. Released by Microsoft not too long ago, it takes lessons from older tools and builds on them, filling gaps that Cypress hasn’t fully addressed yet.\nArchitecture: Inside vs. 
Outside the Browser The biggest difference between Cypress and Playwright lies in how they interact with the browser—and that affects what kinds of tests you can write.\nCypress runs right inside the browser, in the same JavaScript context as your app:\n// Cypress example cy.visit(\u0026#39;/dashboard\u0026#39;) cy.get(\u0026#39;.user-name\u0026#39;).should(\u0026#39;contain\u0026#39;, \u0026#39;John\u0026#39;) cy.get(\u0026#39;.logout-button\u0026#39;).click() cy.url().should(\u0026#39;include\u0026#39;, \u0026#39;/login\u0026#39;) This gives Cypress deep access to your app’s internals, which enables features like time-travel debugging and fine-grained control. The downside? It’s subject to browser security rules, which makes things like testing multiple tabs or cross-origin behavior pretty tricky.\nPlaywright, on the flip side, drives the browser from the outside:\n// Playwright example await page.goto(\u0026#39;/dashboard\u0026#39;); await expect(page.locator(\u0026#39;.user-name\u0026#39;)).toContainText(\u0026#39;John\u0026#39;); await page.click(\u0026#39;.logout-button\u0026#39;); await expect(page).toHaveURL(/login/); Because it operates outside the browser context, Playwright can handle multiple tabs, different domains, and full browser automation—something Cypress struggles with.\nBrowser Support: A Key Differentiator For a lot of teams, browser support is where these tools really start to diverge.\nCypress mainly targets Chrome and other Chromium-based browsers. There’s experimental support for Firefox, and they’ve promised more cross-browser options—but it’s not quite there yet.\nPlaywright, on the other hand, supports Chromium, Firefox, and WebKit (effectively, Safari’s engine) right out of the box. That’s a big deal for teams that need to ensure full browser coverage.\nDeveloper Experience and Debugging One of Cypress’s strongest points is its developer experience. 
The interactive test runner gives you real-time feedback, letting you inspect the DOM, app state, and network calls as tests run.\n// Cypress makes debugging a breeze cy.get(\u0026#39;.data-table\u0026#39;).within(() =\u0026gt; { cy.get(\u0026#39;tr\u0026#39;).should(\u0026#39;have.length\u0026#39;, 10) .then($rows =\u0026gt; { console.log($rows); // Dive right into the elements }); }); Playwright takes a different approach. Instead of a live UI, it offers a trace viewer that records screenshots, DOM snapshots, and network activity during test runs. It’s not as immediate as Cypress, but the amount of detail it captures is super helpful when you’re debugging tricky failures.\n// Playwright debugging with trace await context.tracing.start({ screenshots: true, snapshots: true }); await page.goto(\u0026#39;/complex-page\u0026#39;); await page.click(\u0026#39;.tricky-element\u0026#39;); await context.tracing.stop({ path: \u0026#39;trace.zip\u0026#39; }); // Review later with: npx playwright show-trace trace.zip Scaling and CI Integration Once your test suite grows, speed becomes a real concern—especially in CI.\nCypress offers test parallelization through its Dashboard (which is free for small teams, paid for larger ones). The newer versions (like 7.0 and up) have improved spec organization and configuration flexibility too.\nPlaywright, meanwhile, includes parallelization out of the box:\n// Native parallel execution with Playwright npx playwright test --workers=5 This built-in capability gives Playwright an edge for teams trying to shave time off their CI pipeline without paying for extra services.\nEcosystem and Community Ecosystem matters. Cypress has been around longer and has a mature plugin system, strong community, and tons of shared knowledge. If you run into a common problem, chances are there’s already a plugin or workaround for it.\nPlaywright’s ecosystem is still growing, but it’s catching up fast—thanks in part to Microsoft’s backing. 
Its core package includes a lot of functionality that would require plugins in Cypress, which means fewer dependencies and more predictable behavior.\nSo, Which One Should You Pick? After working hands-on with both, we’ve found the right choice usually depends on your project’s priorities:\nCypress if:\nYou want a batteries-included developer experience You mainly target Chrome/Chromium browsers You value a mature community and plugin ecosystem Your app doesn’t rely on complex multi-tab or cross-domain flows Playwright if:\nCross-browser support (especially Safari) is non-negotiable Your app involves multiple domains, tabs, or more advanced scenarios You need scalable parallelization without extra tools You’re starting fresh and want modern features from day one Some teams I’ve worked with use both—Cypress for quick, dev-friendly tests during development, and Playwright for heavier-duty, cross-browser E2E testing in CI.\nLooking Ahead Both tools are moving fast. Cypress is actively working on architectural updates to support broader browser testing. Playwright keeps refining its APIs and tooling. The competition is pushing both to get better—and that’s great for the rest of us.\nA recent chat with an EM friend: “A couple of years ago, we couldn’t get web engs to write REAL e2e tests. Now we’re debating the finer points between two great tools. That’s a win.”\n","permalink":"/posts/2021-05-10-cypress-playwright-landscape//posts/2021-05-10-cypress-playwright-landscape/","summary":"\u003ch2 id=\"the-state-of-e2e-testing\"\u003eThe State of E2E Testing\u003c/h2\u003e\n\u003cp\u003eE2E testing has traditionally been a headache. Selenium tests are notoriously brittle, and while unit or integration tests catch a lot, they often miss real-world user flows. Cypress helped change that narrative. 
With its clean API and time-travel debugging, it quickly became the go-to for many developers.\u003c/p\u003e\n\u003cp\u003eThat said, Playwright has been picking up steam—especially with teams that need serious cross-browser support. Released by Microsoft not too long ago, it takes lessons from older tools and builds on them, filling gaps that Cypress hasn’t fully addressed yet.\u003c/p\u003e","title":"Cypress vs. Playwright: The Evolving Landscape of Modern E2E Testing"},{"content":"The Code Splitting Dilemma The React team addressed bundle size challenges by introducing React.lazy() and Suspense for component-based code splitting. This native solution works great for client-side rendering, but falls apart when you need SSR:\n// This works fine with client-side rendering const LazyComponent = React.lazy(() =\u0026gt; import(\u0026#39;./LazyComponent\u0026#39;)); function MyComponent() { return ( \u0026lt;Suspense fallback={\u0026lt;div\u0026gt;Loading...\u0026lt;/div\u0026gt;}\u0026gt; \u0026lt;LazyComponent /\u0026gt; \u0026lt;/Suspense\u0026gt; ); } If you\u0026rsquo;re using Next.js, Gatsby, or any other SSR solution, the code above will throw the dreaded error: \u0026ldquo;Suspense is not supported during server-side rendering.\u0026rdquo;\n@loadable/component shines Originally created as a higher-level abstraction over React.lazy(), @loadable/component has evolved into the go-to solution for universal code splitting in React applications.\nInstalling is straightforward:\nnpm install @loadable/component # or yarn add @loadable/component The basic usage mirrors React.lazy(), but with SSR compatibility:\nimport loadable from \u0026#39;@loadable/component\u0026#39;; const LazyComponent = loadable(() =\u0026gt; import(\u0026#39;./LazyComponent\u0026#39;), { fallback: \u0026lt;div\u0026gt;Loading...\u0026lt;/div\u0026gt; }); function MyComponent() { // No Suspense wrapper needed! 
return \u0026lt;LazyComponent /\u0026gt;; } Why I Migrated from React.lazy() After implementing @loadable/component, I discovered several additional benefits:\nDeclaration-level loading indicators: The fallback is specified at the component declaration level, reducing boilerplate.\nEnhanced prefetching capabilities: Unlike React.lazy(), @loadable/component gives you fine-grained control over when to prefetch components:\nconst LazyComponent = loadable(() =\u0026gt; import(\u0026#39;./LazyComponent\u0026#39;)); // Prefetch on demand const handlePrefetch = () =\u0026gt; { LazyComponent.preload(); }; Dynamic loading with props: You can pass variables into your dynamic imports, enabling context-dependent code loading: const LazyComponent = loadable(props =\u0026gt; import(`./components/${props.componentName}`) ); // Usage \u0026lt;LazyComponent componentName=\u0026#34;Dashboard\u0026#34; /\u0026gt; Fine-grained Control with Gatsby For those using Gatsby, @loadable/component offers particularly powerful optimizations. 
As highlighted in Gatsby\u0026rsquo;s documentation, without code splitting, all components will be included in every page bundle—even those that aren\u0026rsquo;t used on the current page.\nConsider this example from a Gatsby-based CMS project:\n// components/ContentBlocks.js import loadable from \u0026#39;@loadable/component\u0026#39;; // Instead of direct imports: // import Carousel from \u0026#39;./blocks/Carousel\u0026#39;; // import VideoPlayer from \u0026#39;./blocks/VideoPlayer\u0026#39;; // import TestimonialCard from \u0026#39;./blocks/TestimonialCard\u0026#39;; // Use loadable components: const Carousel = loadable(() =\u0026gt; import(\u0026#39;./blocks/Carousel\u0026#39;)); const VideoPlayer = loadable(() =\u0026gt; import(\u0026#39;./blocks/VideoPlayer\u0026#39;)); const TestimonialCard = loadable(() =\u0026gt; import(\u0026#39;./blocks/TestimonialCard\u0026#39;)); export const getBlockComponent = (blockType) =\u0026gt; { switch(blockType) { case \u0026#39;carousel\u0026#39;: return Carousel; case \u0026#39;video\u0026#39;: return VideoPlayer; case \u0026#39;testimonial\u0026#39;: return TestimonialCard; default: return null; } }; This approach ensures that if a page only uses testimonials, the code for carousels and video players won\u0026rsquo;t be included in the bundle.\nIntegration with Popular Frameworks Next.js For a full SSR setup with Next.js, you\u0026rsquo;ll need the server modules:\nnpm install @loadable/component @loadable/server @loadable/babel-plugin @loadable/webpack-plugin Then update your Next.js configuration:\n// next.config.js const LoadablePlugin = require(\u0026#39;@loadable/webpack-plugin\u0026#39;); module.exports = { webpack: (config, options) =\u0026gt; { config.plugins.push(new LoadablePlugin()); return config; }, }; Gatsby Gatsby works well with the basic @loadable/component package, as demonstrated in the earlier example. 
The performance improvements are particularly noticeable in content-rich sites where different pages use widely varying components.\nLoading State Best Practices Rather than displaying a generic spinner, consider skeleton screens that match your component\u0026rsquo;s final layout:\nconst ArticleComponent = loadable(() =\u0026gt; import(\u0026#39;./Article\u0026#39;), { fallback: \u0026lt;ArticleSkeleton /\u0026gt; }); // ArticleSkeleton.js const ArticleSkeleton = () =\u0026gt; ( \u0026lt;div className=\u0026#34;article-skeleton\u0026#34;\u0026gt; \u0026lt;div className=\u0026#34;skeleton-title\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;div className=\u0026#34;skeleton-metadata\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;div className=\u0026#34;skeleton-content\u0026#34;\u0026gt; \u0026lt;div className=\u0026#34;skeleton-line\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;div className=\u0026#34;skeleton-line\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;div className=\u0026#34;skeleton-line\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; ); This approach provides a more seamless user experience than abrupt loading indicators.\nPitfalls to Avoid Over-splitting: Don\u0026rsquo;t create loadable components for tiny components. Code-splitting has overhead, so focus on larger components or logical feature groups.\nIgnoring the critical path: Components visible above the fold should often be included in the main bundle.\nMissing error boundaries: Always wrap dynamically loaded components in error boundaries to prevent the entire application from crashing if a chunk fails to load.\nimport { ErrorBoundary } from \u0026#39;react-error-boundary\u0026#39;; function MyApp() { return ( \u0026lt;ErrorBoundary FallbackComponent={ErrorFallback} onError={handleError} \u0026gt; \u0026lt;LazyComponent /\u0026gt; \u0026lt;/ErrorBoundary\u0026gt; ); } Forecast Code splitting remains a crucial optimization technique. 
While React.lazy() may eventually gain SSR support, @loadable/component offers a robust solution today with an API that\u0026rsquo;s likely to remain compatible with future React updates.\n","permalink":"/posts/2021-04-30-code-splitting-loadable-component//posts/2021-04-30-code-splitting-loadable-component/","summary":"\u003ch2 id=\"the-code-splitting-dilemma\"\u003eThe Code Splitting Dilemma\u003c/h2\u003e\n\u003cp\u003eThe React team recognized bundle size challenges by introducing \u003ccode\u003eReact.lazy()\u003c/code\u003e and \u003ccode\u003eSuspense\u003c/code\u003e for component-based code splitting. This native solution works great for client-side rendering, but falls apart when you need SSR:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-jsx\" data-lang=\"jsx\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e// This works fine with client-side rendering\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"kr\"\u003econst\u003c/span\u003e \u003cspan class=\"nx\"\u003eLazyComponent\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003eReact\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003elazy\u003c/span\u003e\u003cspan class=\"p\"\u003e(()\u003c/span\u003e \u003cspan class=\"p\"\u003e=\u0026gt;\u003c/span\u003e \u003cspan class=\"kr\"\u003eimport\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"s1\"\u003e\u0026#39;./LazyComponent\u0026#39;\u003c/span\u003e\u003cspan class=\"p\"\u003e));\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"kd\"\u003efunction\u003c/span\u003e 
\u003cspan class=\"nx\"\u003eMyComponent\u003c/span\u003e\u003cspan class=\"p\"\u003e()\u003c/span\u003e \u003cspan class=\"p\"\u003e{\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"k\"\u003ereturn\u003c/span\u003e \u003cspan class=\"p\"\u003e(\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"p\"\u003e\u0026lt;\u003c/span\u003e\u003cspan class=\"nt\"\u003eSuspense\u003c/span\u003e \u003cspan class=\"na\"\u003efallback\u003c/span\u003e\u003cspan class=\"o\"\u003e=\u003c/span\u003e\u003cspan class=\"p\"\u003e{\u0026lt;\u003c/span\u003e\u003cspan class=\"nt\"\u003ediv\u003c/span\u003e\u003cspan class=\"p\"\u003e\u0026gt;\u003c/span\u003e\u003cspan class=\"nx\"\u003eLoading\u003c/span\u003e\u003cspan class=\"p\"\u003e...\u0026lt;/\u003c/span\u003e\u003cspan class=\"nt\"\u003ediv\u003c/span\u003e\u003cspan class=\"p\"\u003e\u0026gt;}\u0026gt;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e      \u003cspan class=\"p\"\u003e\u0026lt;\u003c/span\u003e\u003cspan class=\"nt\"\u003eLazyComponent\u003c/span\u003e \u003cspan class=\"p\"\u003e/\u0026gt;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"p\"\u003e\u0026lt;/\u003c/span\u003e\u003cspan class=\"nt\"\u003eSuspense\u003c/span\u003e\u003cspan class=\"p\"\u003e\u0026gt;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"p\"\u003e}\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eIf you\u0026rsquo;re using Next.js, 
Gatsby, or any other SSR solution, the code above will throw the dreaded error: \u0026ldquo;\u003cstrong\u003eSuspense is not supported during server-side rendering\u003c/strong\u003e.\u0026rdquo;\u003c/p\u003e","title":"Code Splitting with @loadable/component"},{"content":"Progressive Web Apps have been with us for a while now, transforming the way we think about web applications and blurring the line between native and web experiences. But despite the technology\u0026rsquo;s maturity, I\u0026rsquo;ve found that many developers continue to overlook one of the most powerful PWA features: offline support. And according to recent announcements from the Chrome team, this oversight is about to become a much bigger problem.\nThe coming offline requirement If you\u0026rsquo;ve deployed a PWA recently and checked your console, you might have noticed a new warning: \u0026ldquo;Page does not work offline.\u0026rdquo; This isn\u0026rsquo;t just a friendly reminder—it\u0026rsquo;s a harbinger of significant changes. Starting with Chrome 93 (scheduled for release in August), Google will be changing their installability criteria to require offline functionality. Without it, your PWA simply won\u0026rsquo;t be installable.\nThis change represents a fundamental shift in how Google views PWAs. The message is clear: a proper PWA should work regardless of network conditions, not just act as a glorified bookmark. 
And honestly, they\u0026rsquo;re right.\nWhy offline support matters (beyond satisfying Google) When I first implemented offline support in a client\u0026rsquo;s e-commerce PWA last quarter, the benefits went far beyond mere compliance:\nUser retention: Network interruptions no longer meant losing users Improved perceived performance: Even with a spotty connection, the core experience remained intact Reduced server load: Many requests could be served from cache instead of hitting our backend Better metrics: Time-to-interactive improved dramatically, boosting our Lighthouse scores From rural areas with poor connectivity to subway commuters with intermittent signals, providing a consistent experience regardless of network conditions isn\u0026rsquo;t just good practice—it\u0026rsquo;s good business.\nImplementing proper offline support Let\u0026rsquo;s talk about how to make your PWA work offline. The foundational technology here is Service Workers, which act as a proxy between your application and the network.\nThere are two main approaches to caching for offline use:\n1. Cache-first strategy This approach tries to serve content from the cache first, falling back to the network only when necessary:\nself.addEventListener(\u0026#39;fetch\u0026#39;, event =\u0026gt; { event.respondWith( caches.match(event.request) .then(cachedResponse =\u0026gt; { return cachedResponse || fetch(event.request) .then(response =\u0026gt; { return caches.open(\u0026#39;my-cache\u0026#39;) .then(cache =\u0026gt; { cache.put(event.request, response.clone()); return response; }); }); }) ); }); This works well for static assets and content that doesn\u0026rsquo;t change frequently.\n2. 
Network-first with fallback Here you attempt to fetch from the network first, only using cached content when the network fails:\nself.addEventListener(\u0026#39;fetch\u0026#39;, event =\u0026gt; { event.respondWith( fetch(event.request) .catch(() =\u0026gt; { return caches.match(event.request); }) ); }); This is better for dynamic content where freshness matters more than speed.\nWorkbox: Your offline support secret weapon Writing service worker code from scratch isn\u0026rsquo;t fun. Seriously. I spent three days debugging race conditions in a custom implementation before discovering Workbox, Google\u0026rsquo;s library for service worker management.\nWorkbox abstracts away the complexities with an elegant API:\n// In service-worker.js importScripts(\u0026#39;https://storage.googleapis.com/workbox-cdn/releases/6.1.0/workbox-sw.js\u0026#39;); // Cache CSS, JS, and Web Worker files with a Stale-While-Revalidate strategy workbox.routing.registerRoute( ({request}) =\u0026gt; request.destination === \u0026#39;style\u0026#39; || request.destination === \u0026#39;script\u0026#39; || request.destination === \u0026#39;worker\u0026#39;, new workbox.strategies.StaleWhileRevalidate({ cacheName: \u0026#39;assets\u0026#39;, plugins: [ new workbox.expiration.ExpirationPlugin({ maxEntries: 60, maxAgeSeconds: 30 * 24 * 60 * 60, // 30 days }), ], }), ); // Cache images with a Cache-First strategy workbox.routing.registerRoute( ({request}) =\u0026gt; request.destination === \u0026#39;image\u0026#39;, new workbox.strategies.CacheFirst({ cacheName: \u0026#39;images\u0026#39;, plugins: [ new workbox.expiration.ExpirationPlugin({ maxEntries: 60, maxAgeSeconds: 30 * 24 * 60 * 60, // 30 days }), ], }), ); // Use Network-First for API requests workbox.routing.registerRoute( ({url}) =\u0026gt; url.pathname.startsWith(\u0026#39;/api/\u0026#39;), new workbox.strategies.NetworkFirst({ cacheName: \u0026#39;api-responses\u0026#39;, plugins: [ new workbox.expiration.ExpirationPlugin({ maxEntries: 
50, maxAgeSeconds: 5 * 60, // 5 minutes }), ], }), ); // Provide a fallback for offline navigation workbox.routing.registerRoute( ({request}) =\u0026gt; request.mode === \u0026#39;navigate\u0026#39;, new workbox.strategies.NetworkFirst({ cacheName: \u0026#39;pages\u0026#39;, plugins: [ new workbox.expiration.ExpirationPlugin({ maxEntries: 50, }), ], }), ); This gives you sophisticated caching strategies with just a few lines of code, and it\u0026rsquo;s what I use in production today.\nThe offline UX: Don\u0026rsquo;t just fail silently A common mistake I see in PWAs is handling offline states as an error rather than a first-class experience. Instead of showing the dreaded Chrome dinosaur, consider:\nCustom offline pages: Create a dedicated offline experience that maintains your branding Optimistic UI: Allow users to continue interacting, queuing actions to be processed when online Background sync: Use the Background Sync API to reconcile changes when connectivity returns For a recent client project, we implemented a \u0026ldquo;read-only mode\u0026rdquo; that activates automatically when offline, allowing users to browse previously accessed content while clearly indicating the connection status.\nTesting offline functionality Testing is essential, but many developers skip this crucial step. Here\u0026rsquo;s my testing routine:\nUse Chrome DevTools\u0026rsquo; Network panel and check \u0026ldquo;Offline\u0026rdquo; Test your application in different states (first visit, return visit) Try Progressive Enhancement: start with the network disabled, then enable it Test your custom offline pages and notification systems Validate with Lighthouse PWA audits For more thorough testing, I use Puppeteer to automate these scenarios in CI/CD pipelines. 
Here\u0026rsquo;s a more comprehensive test example:\nconst puppeteer = require(\u0026#39;puppeteer\u0026#39;); const assert = require(\u0026#39;assert\u0026#39;); (async () =\u0026gt; { const browser = await puppeteer.launch(); const page = await browser.newPage(); // Visit site and wait for service worker to install await page.goto(\u0026#39;https://my-pwa.com\u0026#39;); await page.waitForFunction(() =\u0026gt; navigator.serviceWorker.controller !== null ); // Save the title for later comparison const onlineTitle = await page.title(); // Check if important content is loaded const productElements = await page.$$(\u0026#39;[data-test=\u0026#34;product-item\u0026#34;]\u0026#39;); assert(productElements.length \u0026gt; 0, \u0026#39;Product items should be displayed while online\u0026#39;); // Cache important page elements by visiting them await page.click(\u0026#39;[data-test=\u0026#34;nav-about\u0026#34;]\u0026#39;); await page.waitForNavigation(); // Go back to home await page.click(\u0026#39;[data-test=\u0026#34;nav-home\u0026#34;]\u0026#39;); await page.waitForNavigation(); // Disable network await page.setOfflineMode(true); // Reload the page and verify it loads from cache await page.reload(); // Verify title matches to confirm page loaded const offlineTitle = await page.title(); assert.strictEqual(offlineTitle, onlineTitle, \u0026#39;Page title should be the same offline\u0026#39;); // Verify offline indicator is shown const offlineIndicator = await page.$(\u0026#39;[data-test=\u0026#34;offline-indicator\u0026#34;]\u0026#39;); assert(offlineIndicator !== null, \u0026#39;Offline indicator should be visible\u0026#39;); // Verify critical content is still available const offlineProductElements = await page.$$(\u0026#39;[data-test=\u0026#34;product-item\u0026#34;]\u0026#39;); assert(offlineProductElements.length \u0026gt; 0, \u0026#39;Product items should still be displayed offline\u0026#39;); // Try to navigate to another cached page await 
page.click(\u0026#39;[data-test=\u0026#34;nav-about\u0026#34;]\u0026#39;); await page.waitForNavigation({ waitUntil: \u0026#39;networkidle0\u0026#39; }); // Verify we can access this page offline const aboutContent = await page.$(\u0026#39;[data-test=\u0026#34;about-content\u0026#34;]\u0026#39;); assert(aboutContent !== null, \u0026#39;About page content should be accessible offline\u0026#39;); // Try submitting a form offline and verify it\u0026#39;s queued await page.type(\u0026#39;[data-test=\u0026#34;contact-form-email\u0026#34;]\u0026#39;, \u0026#39;test@example.com\u0026#39;); await page.type(\u0026#39;[data-test=\u0026#34;contact-form-message\u0026#34;]\u0026#39;, \u0026#39;Testing offline submission\u0026#39;); await page.click(\u0026#39;[data-test=\u0026#34;contact-form-submit\u0026#34;]\u0026#39;); // Check for queue confirmation message await page.waitForSelector(\u0026#39;[data-test=\u0026#34;offline-queue-confirmation\u0026#34;]\u0026#39;, { visible: true, timeout: 5000 }); // Turn network back on await page.setOfflineMode(false); // Verify background sync indicator appears await page.waitForSelector(\u0026#39;[data-test=\u0026#34;sync-in-progress\u0026#34;]\u0026#39;, { visible: true, timeout: 5000 }); // Wait for sync to complete await page.waitForSelector(\u0026#39;[data-test=\u0026#34;sync-complete\u0026#34;]\u0026#39;, { visible: true, timeout: 10000 }); // Take screenshot of final state await page.screenshot({path: \u0026#39;offline-test-results.png\u0026#39;}); await browser.close(); console.log(\u0026#39;PWA offline functionality test passed!\u0026#39;); })(); This test verifies several critical aspects of offline functionality:\nProper service worker installation Content caching and retrieval when offline Offline state indication to users Navigation between cached pages Form submission queuing when offline Background sync when connection is restored I\u0026rsquo;ve found that using data attributes like 
data-test=\u0026quot;component-name\u0026quot; makes tests more resilient to UI changes, as recommended by testing best practices. Avoid relying on CSS classes or element structure which might change during design updates.\nAvoiding common pitfalls After implementing offline support for several client projects, I\u0026rsquo;ve identified some recurring issues:\nOver-caching: Be cautious about what you cache and for how long Under-caching: Missing critical assets makes your offline experience break Not handling POST requests: Implement strategies for form submissions while offline Forgetting to version your cache: You need a strategy for cache invalidation when you deploy updates Ignoring the App Shell model: Separate your application shell from your content for better performance Looking forward As PWAs continue to mature, offline support will become not just a requirement but an expectation. By implementing robust offline capabilities now, you\u0026rsquo;re not just preparing for Chrome\u0026rsquo;s upcoming changes—you\u0026rsquo;re building a more resilient application that can provide value to users regardless of their network conditions.\nThe web is evolving beyond its connected origins, embracing capabilities that were once exclusive to native applications. By adopting offline-first thinking, we push this evolution forward and create experiences that truly work for users, not just for ideal network conditions.\n","permalink":"/posts/2021-02-05-pwa-offline-support/","summary":"\u003cp\u003eProgressive Web Apps have been with us for a while now, transforming the way we think about web applications and blurring the line between native and web experiences. But despite the technology\u0026rsquo;s maturity, I\u0026rsquo;ve found that many developers continue to overlook one of the most powerful PWA features: offline support. 
And according to recent announcements from the Chrome team, this oversight is about to become a much bigger problem.\u003c/p\u003e\n\u003ch2 id=\"the-coming-offline-requirement\"\u003eThe coming offline requirement\u003c/h2\u003e\n\u003cp\u003eIf you\u0026rsquo;ve deployed a PWA recently and checked your console, you might have noticed a new warning: \u0026ldquo;Page does not work offline.\u0026rdquo; This isn\u0026rsquo;t just a friendly reminder—it\u0026rsquo;s a harbinger of significant changes. Starting with Chrome 93 (scheduled for release in August), Google will be changing their installability criteria to require offline functionality. Without it, your PWA simply won\u0026rsquo;t be installable.\u003c/p\u003e","title":"Beyond the Network: Building Truly Resilient PWAs with Offline Support"},{"content":"Over the past year, GitHub Actions has transformed from an exciting beta feature into a core offering that many developers (myself included) have eagerly adopted for our CI/CD pipelines. The promise is compelling: native automation workflows right where your code lives, with a generous free tier and an ever-growing marketplace of community actions. But as with any maturing technology, the journey hasn\u0026rsquo;t been without its fair share of turbulence.\nThe Good Parts Let\u0026rsquo;s start with what makes GitHub Actions genuinely fantastic:\nNative Integration: There\u0026rsquo;s something undeniably satisfying about having your code, issues, pull requests, and CI/CD all living under one roof. No more context switching between systems or managing separate authentication mechanisms. Everything is right there in GitHub.\nMarketplace Ecosystem: With thousands of community-created actions available, you can often find solutions for common tasks without writing custom code. Need to deploy to AWS, send a Slack notification, or run security scans? 
There\u0026rsquo;s probably an action for that.\nMatrix Builds: Running tests across multiple environments is trivially easy with matrix configurations. I recently set up a project to test across Node 10, 12, and 14 with just a few lines of YAML.\nWhen I first migrated my projects from CircleCI and Travis CI to GitHub Actions, the experience was nothing short of liberating. No more context-switching between services, no more synchronizing webhooks, and the YAML syntax felt refreshingly straightforward after battling with Jenkins pipelines.\nThe tight integration with the GitHub ecosystem is genuinely impressive. Creating workflows that automatically run on pull requests, label issues based on content, or deploy your application after specific branch merges feels almost magical when it all works seamlessly. Throw in the free tier minutes, and it\u0026rsquo;s no wonder GitHub Actions has gained such rapid adoption among individual developers and teams alike.\nWhen Reality Strikes: The Frustrating Inconsistencies But here\u0026rsquo;s where the honeymoon ends and the marriage work begins. If you\u0026rsquo;ve been using GitHub Actions extensively, you\u0026rsquo;ve likely encountered some of its more\u0026hellip; quirky behaviors.\nTake scheduled workflows, for instance. In theory, they should be a reliable way to trigger periodic tasks:\non: schedule: - cron: \u0026#39;*/5 * * * *\u0026#39; # Run every 5 minutes In practice? As the folks at Upptime recently reported, workflows scheduled to run every five minutes sometimes run as infrequently as once per hour. For critical monitoring tasks or time-sensitive operations, this unpredictability can be a showstopper.\nThe reliability issues don\u0026rsquo;t end there. I\u0026rsquo;ve had workflows mysteriously fail to trigger on push events, only to work flawlessly when manually re-run. 
The dreaded \u0026ldquo;workflow_dispatch event wasn\u0026rsquo;t triggered for ref\u0026rdquo; error has become an all-too-familiar sight in my GitHub notifications.\nThe Deployment Dilemma For those of us using GitHub Actions for deployment workflows, the challenges become even more pronounced. As Colin Dembovsky aptly points out in his recent post, the environment approval system can be particularly frustrating.\nWant a sequential deployment pipeline that flows from dev to staging to production with approvals? Get ready for a less-than-elegant experience. Each environment requires its own job, and managing approvals becomes cumbersome quickly. The lack of native support for deployment pipelines feels like a significant oversight for a CI/CD platform today.\nHere\u0026rsquo;s a simplified version of what many of us are forced to cobble together:\njobs: deploy-dev: runs-on: ubuntu-latest environment: development steps: # Deploy to dev deploy-staging: needs: deploy-dev runs-on: ubuntu-latest environment: staging steps: # Deploy to staging deploy-prod: needs: deploy-staging runs-on: ubuntu-latest environment: production steps: # Deploy to production Not terrible on the surface, but the workflow quickly becomes unwieldy for complex deployments or when you need to share outputs between jobs. And heaven help you if you need to implement rollbacks or handle deployment failures gracefully.\nWorkflow Secrets: An Incomplete Solution Another pain point that\u0026rsquo;s become increasingly apparent is secrets management. While GitHub\u0026rsquo;s secrets feature works well enough for simple cases, it doesn\u0026rsquo;t scale elegantly for complex projects or organizations.\nWant to share secrets across repositories? You\u0026rsquo;ll need to manually copy them or set up GitHub Actions to synchronize them. Need to rotate secrets? Better update every repository that uses them. 
And if you\u0026rsquo;re working with multiple environments, the UI quickly becomes cluttered with similarly named secrets for different contexts.\nThe lack of namespacing or hierarchical secrets management feels dated compared to more mature solutions like HashiCorp Vault or even AWS Secrets Manager.\nFinding Workarounds: The Developer\u0026rsquo;s Eternal Task Despite these frustrations, the community has been remarkably resourceful in developing workarounds. One approach I\u0026rsquo;ve found effective for the scheduling reliability issue is to trigger workflows via external services like AWS EventBridge or a simple cron job on a dedicated server.\nFor the deployment pipeline limitations, some teams are using composite actions or reusable workflow snippets to reduce duplication. Others are exploring tools like Terraform or Pulumi to handle the actual deployments, using GitHub Actions primarily as a trigger mechanism.\nA pattern I\u0026rsquo;ve personally adopted is keeping GitHub Actions workflows relatively thin, using them to orchestrate rather than implement complex logic:\njobs: build-and-test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Setup Node uses: actions/setup-node@v2 with: node-version: \u0026#39;14\u0026#39; - run: npm ci - run: npm test deploy: needs: build-and-test runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Deploy with custom script run: ./scripts/deploy.sh env: DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }} This approach keeps the YAML reasonably clean while moving complex logic to scripts that can be tested locally and version-controlled alongside your application code.\nLooking Ahead: The Future of GitHub Actions Despite its current limitations, GitHub Actions continues to evolve at an impressive pace. 
The marketplace now boasts thousands of community-contributed actions, and GitHub has shown willingness to address pain points with features like environment protection rules and improved logging.\nFor those of us invested in the GitHub ecosystem, there\u0026rsquo;s reason for optimism. Just before the holidays, GitHub quietly rolled out improvements to the workflow editor, and rumors suggest that enhanced deployment pipelines and better secrets management are on the roadmap.\nIn the meantime, the best approach seems to be a pragmatic one: leverage GitHub Actions for what it does well (simple CI/CD, code quality checks, automation of GitHub-specific tasks), while maintaining awareness of its limitations and having fallback strategies for critical functionality.\nThe Verdict: Worth the Hassle (Usually) So, is GitHub Actions worth it? For most projects, I\u0026rsquo;d say yes—with caveats. The convenience of having CI/CD directly integrated with your code repository is significant, especially for smaller teams or individual developers who don\u0026rsquo;t want to manage multiple systems.\nHowever, for mission-critical deployments or workflows where timing precision is essential, it\u0026rsquo;s wise to maintain alternative approaches or at least robust error handling. The platform is maturing rapidly, but it\u0026rsquo;s not yet at the level of reliability offered by some dedicated CI/CD systems that have been in the market longer.\nWhat keeps me on GitHub Actions despite the occasional frustrations? It\u0026rsquo;s the continuous improvement and the vibrant community. 
Every month brings new features and refinements, and the ecosystem of shared actions means I rarely need to solve a problem from scratch.\nFor now, I\u0026rsquo;ll continue to embrace GitHub Actions—bugs, quirks, and all—while keeping a watchful eye on alternatives and maintaining those essential workarounds that keep my deployments flowing when GitHub\u0026rsquo;s reliability falters.\n","permalink":"/posts/2021-01-31-github-actions/","summary":"\u003cp\u003eOver the past year, GitHub Actions has transformed from an exciting beta feature into a core offering that many developers (myself included) have eagerly adopted for our CI/CD pipelines. The promise is compelling: native automation workflows right where your code lives, with a generous free tier and an ever-growing marketplace of community actions. But as with any maturing technology, the journey hasn\u0026rsquo;t been without its fair share of turbulence.\u003c/p\u003e\n\u003ch2 id=\"the-good-parts\"\u003eThe Good Parts\u003c/h2\u003e\n\u003cp\u003eLet\u0026rsquo;s start with what makes GitHub Actions genuinely fantastic:\u003c/p\u003e","title":"GitHub Actions: Powerful CI/CD with Persistent Growing Pains"},{"content":"When Webpack 5 was officially released in October, the frontend community couldn\u0026rsquo;t stop talking about one feature in particular: Module Federation. After years of grappling with increasingly complex frontend architectures, the promise of being able to seamlessly share code between applications at runtime—not just build time—feels like the solution many of us have been waiting for.\nIf you follow frontend architecture discussions, you\u0026rsquo;ve likely seen the buzz around Module Federation since earlier this year when Zack Jackson first introduced the concept. 
As he aptly put it, this is \u0026ldquo;the JavaScript bundler equivalent of what Apollo did with GraphQL\u0026rdquo; - a truly revolutionary approach to code sharing.\nAfter watching presentation videos and reading articles about the theoretical benefits, I finally had time to roll up my sleeves and try it myself. In this post, I\u0026rsquo;ll walk through my first experiment with Module Federation in a small-scale project, showing you how to set it up and highlighting the interesting discoveries I made along the way.\nUnderstanding the Problem Before diving into the code, let\u0026rsquo;s briefly outline the problem Module Federation is trying to solve. Traditionally, we\u0026rsquo;ve had a few approaches to share code between applications:\nNPM packages: Extract shared code into libraries, publish them, and import them into each application. This works but creates a tight coupling at build time and requires a full rebuild and deployment cycle for updates.\nMonorepos: Keep all applications in a single repository with shared components. This helps with consistent versioning but doesn\u0026rsquo;t solve the runtime dependency problem.\niframes or runtime loading: These approaches often introduce their own complexities and limitations.\nModule Federation offers a new approach: applications can expose and consume parts of their codebase at runtime without having to package and deploy them separately. As InfoQ quoted Zack Jackson, the motivation was clear: \u0026ldquo;Sharing code is not easy. Depending on your scale, it can even be unprofitable.\u0026rdquo;\nSetting Up Our Experiment For my experiment, I created two simple applications:\nHost App: The main application that will consume components from the remote app. Remote App: A secondary application that exposes components to be consumed by the host. Both applications are basic React apps, but the principles apply to any JavaScript framework. 
Let\u0026rsquo;s start by setting up the configuration for each.\nThe Remote App Configuration First, we need to configure our remote app to expose certain modules. Here\u0026rsquo;s what my webpack configuration looks like:\nconst { ModuleFederationPlugin } = require(\u0026#34;webpack\u0026#34;).container; const path = require(\u0026#34;path\u0026#34;); const deps = require(\u0026#34;./package.json\u0026#34;).dependencies; module.exports = { entry: \u0026#34;./src/index\u0026#34;, mode: \u0026#34;development\u0026#34;, devServer: { contentBase: path.join(__dirname, \u0026#34;dist\u0026#34;), port: 8081, }, output: { publicPath: \u0026#34;http://localhost:8081/\u0026#34;, }, module: { rules: [ { test: /\\.jsx?$/, loader: \u0026#34;babel-loader\u0026#34;, exclude: /node_modules/, options: { presets: [\u0026#34;@babel/preset-react\u0026#34;], }, }, ], }, plugins: [ new ModuleFederationPlugin({ name: \u0026#34;remote_app\u0026#34;, filename: \u0026#34;remoteEntry.js\u0026#34;, exposes: { \u0026#34;./Button\u0026#34;: \u0026#34;./src/components/Button\u0026#34;, \u0026#34;./Header\u0026#34;: \u0026#34;./src/components/Header\u0026#34;, }, shared: { react: { singleton: true, requiredVersion: deps.react, }, \u0026#34;react-dom\u0026#34;: { singleton: true, requiredVersion: deps[\u0026#34;react-dom\u0026#34;], }, }, }), ], }; The key part here is the ModuleFederationPlugin configuration:\nname: The name of our remote app, which will be used by the host to reference it. filename: The name of the entry file that will be generated. exposes: The modules we want to expose to other applications. In this case, I\u0026rsquo;m exposing two React components. shared: Dependencies that should be shared between the host and remote. This is critical to avoid loading multiple instances of React. 
The Host App Configuration Now, let\u0026rsquo;s configure our host app to consume the components exposed by the remote:\nconst { ModuleFederationPlugin } = require(\u0026#34;webpack\u0026#34;).container; const path = require(\u0026#34;path\u0026#34;); const deps = require(\u0026#34;./package.json\u0026#34;).dependencies; module.exports = { entry: \u0026#34;./src/index\u0026#34;, mode: \u0026#34;development\u0026#34;, devServer: { contentBase: path.join(__dirname, \u0026#34;dist\u0026#34;), port: 8080, }, output: { publicPath: \u0026#34;http://localhost:8080/\u0026#34;, }, module: { rules: [ { test: /\\.jsx?$/, loader: \u0026#34;babel-loader\u0026#34;, exclude: /node_modules/, options: { presets: [\u0026#34;@babel/preset-react\u0026#34;], }, }, ], }, plugins: [ new ModuleFederationPlugin({ name: \u0026#34;host_app\u0026#34;, remotes: { remote_app: \u0026#34;remote_app@http://localhost:8081/remoteEntry.js\u0026#34;, }, shared: { react: { singleton: true, requiredVersion: deps.react, }, \u0026#34;react-dom\u0026#34;: { singleton: true, requiredVersion: deps[\u0026#34;react-dom\u0026#34;], }, }, }), ], }; The key differences in the host configuration:\nInstead of exposes, we have remotes which defines the remote applications we want to consume from. The format is \u0026quot;name\u0026quot;: \u0026quot;remoteName@remoteUrl/filename\u0026quot;, where the name, remoteName, and filename match what we defined in the remote app. Using Remote Components in the Host App With the configuration in place, we can now use the remote components in our host app. 
However, there\u0026rsquo;s an important detail: since the remote modules are loaded at runtime, we need to use dynamic imports.\nHere\u0026rsquo;s how I structured my host app\u0026rsquo;s entry point:\n// src/bootstrap.js import React from \u0026#39;react\u0026#39;; import ReactDOM from \u0026#39;react-dom\u0026#39;; import App from \u0026#39;./App\u0026#39;; ReactDOM.render(\u0026lt;App /\u0026gt;, document.getElementById(\u0026#39;root\u0026#39;)); // src/index.js import(\u0026#39;./bootstrap\u0026#39;); And here\u0026rsquo;s how I used the remote components in my App.js:\nimport React, { Suspense, lazy } from \u0026#39;react\u0026#39;; // Import remote components using dynamic import const RemoteButton = lazy(() =\u0026gt; import(\u0026#39;remote_app/Button\u0026#39;)); const RemoteHeader = lazy(() =\u0026gt; import(\u0026#39;remote_app/Header\u0026#39;)); const App = () =\u0026gt; { return ( \u0026lt;div\u0026gt; \u0026lt;h1\u0026gt;Host Application\u0026lt;/h1\u0026gt; \u0026lt;Suspense fallback={\u0026lt;div\u0026gt;Loading Header...\u0026lt;/div\u0026gt;}\u0026gt; \u0026lt;RemoteHeader /\u0026gt; \u0026lt;/Suspense\u0026gt; \u0026lt;div\u0026gt; \u0026lt;p\u0026gt;This is some local content in the host app.\u0026lt;/p\u0026gt; \u0026lt;Suspense fallback={\u0026lt;div\u0026gt;Loading Button...\u0026lt;/div\u0026gt;}\u0026gt; \u0026lt;RemoteButton /\u0026gt; \u0026lt;/Suspense\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; ); }; export default App; Notice how I\u0026rsquo;m using React\u0026rsquo;s Suspense and lazy to handle the asynchronous loading of the remote components. This is a key pattern when working with Module Federation.\nWhat I Learned: Key Insights After getting this basic example working, here are some interesting things I discovered:\n1. The Runtime Connection One of the most fascinating aspects is that the host application is loading the special remoteEntry.js file at runtime. 
This JavaScript file is only a few KB in size and serves as the orchestration layer between the applications. It\u0026rsquo;s not a traditional entry point but rather a specialized Webpack runtime that provisions the connection to other Webpack builds.\n2. Shared Dependencies Are Crucial The shared configuration is more important than I initially realized. If React isn\u0026rsquo;t properly shared between applications, you can end up with multiple instances of React running, which leads to \u0026ldquo;Invalid Hook Calls\u0026rdquo; and other strange behaviors. By setting singleton: true, we ensure only one instance of React is loaded.\n3. Version Compatibility Is Handled Module Federation can intelligently handle version compatibility between shared dependencies. If the host and remote specify compatible versions (using semver), everything works smoothly. If there\u0026rsquo;s a conflict, Webpack will warn you and may load multiple versions depending on your configuration.\n4. Development Experience During development, I found it quite powerful that I could make changes to the remote app, and upon refresh, those changes would be immediately reflected in the host application. 
This creates a development experience that feels surprisingly cohesive despite working with separate builds.\nPractical Applications While my experiment was small-scale, it\u0026rsquo;s easy to see how this technology could be applied to larger systems:\nMicro-frontends: Teams can develop and deploy their parts of an application independently, with a main shell application that loads them at runtime.\nDesign Systems: A central team could maintain a design system that other applications consume at runtime, ensuring everyone always has the latest components without needing to update packages.\nFeature Flagging: You could dynamically load different implementations of features based on user preferences or A/B testing requirements.\nGradual Migrations: When migrating from one framework to another, you could have parts of your application in the old framework and parts in the new one, all working together seamlessly.\nLimitations and Considerations While my experience with Module Federation has been mostly positive, there are some considerations to keep in mind:\nNetwork Dependencies: Your application now has runtime dependencies on other applications, which introduces network-related failure modes.\nVersioning Strategy: You need a clear strategy for versioning exposed modules to avoid breaking changes.\nDebugging Complexity: Debugging across module boundaries can be more complex than with a monolithic application.\nBrowser Support: While the webpack runtime works in all modern browsers, you\u0026rsquo;ll need to ensure your transpilation settings accommodate your target browsers.\nConclusion Module Federation represents a significant evolution in how we can structure JavaScript applications, and the excitement around it is well-justified. My small experiment has convinced me that this approach has real potential for improving how we build complex frontend systems. 
As Paweł Szonecki recently noted, \u0026ldquo;the effort put in at the beginning of its implementation will pay off in the long run during the project.\u0026rdquo;\nThe Webpack team and community contributors have provided us with what might be the most significant advancement in JavaScript architecture this year. If you\u0026rsquo;re dealing with complex frontend applications or considering a micro-frontend approach, I highly recommend giving Module Federation a try. The setup is surprisingly straightforward once you understand the core concepts, and the benefits for code sharing and independent deployments are substantial.\nThe code examples are simplified versions of what I used in my actual experiment, but they capture the essence of how to get started.\n","permalink":"/posts/2020-12-18-webpack-5-module-federation-first//posts/2020-12-18-webpack-5-module-federation-first/","summary":"\u003cp\u003eWhen Webpack 5 was officially released in October, the frontend community couldn\u0026rsquo;t stop talking about one feature in particular: Module Federation. After years of grappling with increasingly complex frontend architectures, the promise of being able to seamlessly share code between applications at runtime—not just build time—feels like the solution many of us have been waiting for.\u003c/p\u003e\n\u003cp\u003eIf you follow frontend architecture discussions, you\u0026rsquo;ve likely seen the buzz around Module Federation since earlier this year when Zack Jackson first \u003ca href=\"https://medium.com/swlh/webpack-5-module-federation-a-game-changer-to-javascript-architecture-bcdd30e02669\"\u003eintroduced the concept\u003c/a\u003e. As he aptly put it, this is \u0026ldquo;the JavaScript bundler equivalent of what Apollo did with GraphQL\u0026rdquo; - a truly revolutionary approach to code sharing.\u003c/p\u003e","title":"Webpack 5 Module Federation: My First Small-Scale Experiment"},{"content":"The journey toward headless architecture is rarely a straight path. 
Over the past year, I\u0026rsquo;ve helped some organizations transition from traditional content management systems to headless alternatives, and each migration revealed unique challenges and opportunities that don\u0026rsquo;t always make it into the marketing materials. Let\u0026rsquo;s dive into what actually happens when you \u0026ldquo;chop off the head\u0026rdquo; of your CMS—a procedure that sounds more like medieval punishment than modern web architecture.\nThe Promise vs. The Reality We\u0026rsquo;ve all heard the pitch: a headless CMS delivers content through RESTful APIs, freeing it from presentation constraints and enabling true omnichannel publishing. It sounds perfect on paper—store your content once, display it anywhere, from web to mobile to IoT devices. It\u0026rsquo;s the \u0026ldquo;write once, run anywhere\u0026rdquo; dream, except we\u0026rsquo;ve actually managed to make it work this time (looking at you, Java).\nBut as one of my clients discovered when moving from WordPress to Contentful, the initial excitement quickly gives way to the sobering reality of content modeling. Their marketing team had grown accustomed to the WYSIWYG editor and the ability to embed media directly into posts. The transition to structured content required a mental shift that caught them off-guard.\n\u0026ldquo;We spent three weeks just figuring out how to model our blog posts,\u0026rdquo; their tech lead told me. \u0026ldquo;What seemed straightforward in WordPress—like having an image float to the right with text wrapping around it—suddenly required careful planning of content types and relationship fields.\u0026rdquo; They started referring to their content strategy meetings as \u0026ldquo;couples therapy for designers and content editors.\u0026rdquo;\nThe Migration Nightmare That Wasn\u0026rsquo;t Content migration is often cited as the biggest hurdle in adopting a headless CMS. 
Most write-ups on the subject reinforce this, noting how complex it is to implement logic that understands presentational data structures on both sides of the transfer.\nWhen tackling a Drupal-to-Strapi migration for an educational institution, I braced myself for weeks of content freezes and custom migration scripts. I had nightmares about broken internal references and assets vanishing into the digital ether—the kind of dreams where you\u0026rsquo;re falling but instead of ground, there\u0026rsquo;s just an infinite sea of malformed JSON objects.\nInstead of building a one-time migration tool that would be discarded afterward (as is often the case), I took a different approach. I built a lightweight Node.js layer that consumed Drupal\u0026rsquo;s REST export, transformed content into Strapi\u0026rsquo;s expected schema, and used Strapi\u0026rsquo;s API to push content incrementally.\nThis allowed for iterative testing and refinement of the migration process. I could run it multiple times against a staging environment before the final cutover, dramatically reducing the risk. 
The content freeze ended up lasting just six hours, rather than days.\n// Example snippet the migration utility const drupalContent = await fetchDrupalContent(endpoint); const transformedContent = drupalContent.map(item =\u0026gt; ({ title: item.title[0].value, body: processBodyField(item.body[0].value), // Handle HTML conversion slug: generateSlug(item.title[0].value), categories: mapCategories(item.field_category), featured_image: await migrateImage(item.field_image) })); for (const item of transformedContent) { try { await strapi.create(\u0026#39;article\u0026#39;, item); console.log(`Migrated: ${item.title}`); } catch (err) { console.error(`Failed to migrate: ${item.title}`, err); // Log detailed error for post-migration cleanup fs.appendFileSync(\u0026#39;migration-errors.log\u0026#39;, `${item.title}: ${JSON.stringify(err)}\\n`); } } // Helper function to process Drupal\u0026#39;s HTML content function processBodyField(html) { // Remove Drupal-specific classes const cleanedHtml = html .replace(/class=\u0026#34;drupal-[\\w-]+\u0026#34;/g, \u0026#39;\u0026#39;) .replace(/\u0026lt;!--.*?--\u0026gt;/g, \u0026#39;\u0026#39;); // Remove HTML comments // Transform internal links return cleanedHtml.replace( /href=\u0026#34;\\/node\\/(\\d+)\u0026#34;/g, (match, nodeId) =\u0026gt; `href=\u0026#34;/content/${drupalSlugMap[nodeId] || \u0026#39;\u0026#39;}\u0026#34;` ); } Technology Stack Considerations The choice of headless CMS doesn\u0026rsquo;t exist in isolation—it\u0026rsquo;s part of a larger technology ecosystem. In 2020, the JAMstack approach has gained tremendous momentum, with Netlify and Vercel making deployment and hosting straightforward.\nThe most successful implementations I\u0026rsquo;ve seen pair a headless CMS with Next.js or Gatsby for rendering, a robust CDN for global delivery, CI/CD pipelines that rebuild when content changes, and serverless functions for dynamic functionality. 
It\u0026rsquo;s like assembling your perfect band—each technology brings its own strengths to create something greater than the sum of its parts (though with considerably less touring and groupies).\nOne e-commerce client I worked with replaced their monolithic Magento store with a headless architecture using Strapi for product content, Shopify\u0026rsquo;s headless commerce APIs for cart and checkout, Next.js for frontend rendering with incremental static regeneration, and Vercel for hosting. They saw page load times drop from 4.2 seconds to under 800ms, and mobile conversion rates improved by 23% within two months of launch.\nHere\u0026rsquo;s a glimpse of the Next.js configuration I used to enable incremental static regeneration for their product pages:\n// next.config.js for an e-commerce site with ISR module.exports = { images: { domains: [ \u0026#39;cdn.shopify.com\u0026#39;, \u0026#39;strapi-uploads.s3.amazonaws.com\u0026#39;, ], }, async redirects() { return [ { source: \u0026#39;/products/old-url/:slug\u0026#39;, destination: \u0026#39;/products/:slug\u0026#39;, permanent: true, }, ]; }, webpack: (config) =\u0026gt; { // Add support for SVG imports config.module.rules.push({ test: /\\.svg$/, use: [\u0026#39;@svgr/webpack\u0026#39;], }); return config; }, }; // In [slug].js product page export async function getStaticProps({ params }) { try { const product = await fetchProductBySlug(params.slug); return { props: { product, }, // Revalidate every 10 minutes - good balance for frequently // changing inventory without hammering the API revalidate: 600, }; } catch (error) { console.error(`Failed to generate product page: ${params.slug}`, error); return { notFound: true }; } } export async function getStaticPaths() { // Only pre-render the top 100 most popular products at build time const popularProducts = await fetchPopularProducts(100); return { paths: popularProducts.map(product =\u0026gt; ({ params: { slug: product.slug }, })), // Generate remaining pages 
on-demand fallback: \u0026#39;blocking\u0026#39;, }; } Authentication and Permissions: The Hidden Complexity Most headless CMS tutorials show you how to pull public content, but what about authenticated content experiences? This is where many teams hit roadblocks. It\u0026rsquo;s the \u0026ldquo;here be dragons\u0026rdquo; section of the architectural map.\nA membership organization I worked with needed to display different content to members based on their subscription tier. In their WordPress site, this was handled by a membership plugin with template conditionals. Moving to a headless architecture required rethinking their entire authentication flow.\nI ended up implementing JWT authentication through Auth0, role-based content access in their headless CMS (Sanity.io), and server-side rendering with Next.js to prevent authenticated content flashing. The solution works beautifully now, but it took twice as long as initially estimated, largely because authentication was an afterthought in the migration planning. 
I now have a sticky note on my monitor that reads \u0026ldquo;Remember Auth: It\u0026rsquo;s Not Just for Social Logins\u0026rdquo; as a permanent reminder.\nHere\u0026rsquo;s the authentication middleware(illustrative) I built for their Next.js app:\n// api/middleware/withAuth.js - Next.js API route middleware import { getSession } from \u0026#39;next-auth/client\u0026#39;; import jwt from \u0026#39;jsonwebtoken\u0026#39;; // Middleware to protect API routes and attach user roles to the request export default function withAuth(handler) { return async (req, res) =\u0026gt; { try { // Get session from next-auth const session = await getSession({ req }); if (!session) { return res.status(401).json({ error: \u0026#39;Not authenticated\u0026#39; }); } // Get user details from session const { user } = session; // Create a signed JWT with user data and membership level // to pass to the Sanity API for content permissions const token = jwt.sign( { sub: user.id, name: user.name, email: user.email, membershipLevel: user.membershipLevel || \u0026#39;free\u0026#39;, }, process.env.SANITY_API_TOKEN_SECRET, { expiresIn: \u0026#39;1h\u0026#39; } ); // Attach token and user to the request req.token = token; req.user = user; // Continue to the actual API route handler return handler(req, res); } catch (error) { console.error(\u0026#39;Auth middleware error:\u0026#39;, error); return res.status(500).json({ error: \u0026#39;Authentication error\u0026#39; }); } }; } // Example usage in an API route // pages/api/protected-content.js import withAuth from \u0026#39;../../api/middleware/withAuth\u0026#39;; import { sanityClient } from \u0026#39;../../lib/sanity\u0026#39;; const handler = async (req, res) =\u0026gt; { // The request now has user and token available const { user, token } = req; // Query Sanity with the token for permission-based content const query = `*[_type == \u0026#34;content\u0026#34; \u0026amp;\u0026amp; access \u0026lt;= $membershipLevel] { title, body, 
\u0026#34;imageUrl\u0026#34;: image.asset-\u0026gt;url }`; const content = await sanityClient.fetch(query, { membershipLevel: user.membershipLevel || \u0026#39;free\u0026#39; }, { headers: { Authorization: `Bearer ${token}` } }); return res.status(200).json(content); }; export default withAuth(handler); Lessons Learned and Best Practices After multiple migrations and implementations, several patterns have emerged for successful headless CMS adoption. Start with thorough content modeling before migration—understand your types, relationships, and presentation needs before writing a single line of code. Build a proof of concept with real content because those abstract demos with lorem ipsum hide the true complexity like makeup on a first date.\nRemember that content editors are primary users, so their experience matters more than developer convenience. Plan for incremental migration instead of big-bang transitions, which tend to explode in exactly the wrong ways. Document your content structure decisions carefully—they\u0026rsquo;re about as easy to change later as your childhood nickname.\nDon\u0026rsquo;t skimp on frontend expertise early in the process. The presentation layer needs careful architecture, especially if performance is a key goal. And always build with scale in mind, because as content grows, query performance becomes crucial. I\u0026rsquo;ve seen GraphQL queries that started out zippy and ended up taking longer to resolve than a government committee decision.\nIs Headless Right for You? Despite the growing popularity, headless isn\u0026rsquo;t the answer for every project. I\u0026rsquo;ve advised several clients to stick with traditional CMS platforms when they lack frontend development resources, have relatively simple content needs, rely on extensive custom workflows in their current CMS, or have teams allergic to change. 
Not every website needs to be rebuilt as a spaceship when sometimes a reliable sedan gets you there just fine.\nThe most successful transitions happen when organizations understand both the benefits and the trade-offs.\nConclusion Headless CMS adoption represents a fundamental shift in how we think about content. By separating content from presentation through APIs, we gain flexibility and performance, but also take on new responsibilities and challenges.\nIf you\u0026rsquo;re considering this transition, learn from those who\u0026rsquo;ve gone before you. Plan carefully, understand the full scope of the migration, invest in the right skills, and be prepared for unexpected complications. The payoff—a flexible, future-proof content architecture that can adapt to changing presentation needs—is worth the journey for many organizations. Just make sure you go in with eyes wide open and perhaps a sense of humor. You\u0026rsquo;ll need both.\nnote: This post is based on real client experiences, though some details have been changed to protect confidentiality and amplified for comedic effect. No actual CMS heads were harmed in the writing of this article.\n","permalink":"/posts/2020-12-09-headless-cms-in-the-wild//posts/2020-12-09-headless-cms-in-the-wild/","summary":"\u003cp\u003eThe journey toward headless architecture is rarely a straight path. Over the past year, I\u0026rsquo;ve helped some organizations transition from traditional content management systems to headless alternatives, and each migration revealed unique challenges and opportunities that don\u0026rsquo;t always make it into the marketing materials. 
Let\u0026rsquo;s dive into what actually happens when you \u0026ldquo;chop off the head\u0026rdquo; of your CMS—a procedure that sounds more like medieval punishment than modern web architecture.\u003c/p\u003e","title":"Headless CMS in the Wild: Migration Stories and Strategies"},{"content":"The front-end landscape has been dominated by React, Vue, and Angular for years now. These frameworks have fundamentally transformed how we build web applications, bringing reactivity, component-based architectures, and improved developer experiences. But they\u0026rsquo;ve also introduced significant runtime costs: virtual DOM diffing, component lifecycle management, and substantial JavaScript bundles that users must download and parse before seeing anything meaningful.\nHear out Svelte 3, which has been gaining serious momentum since its release last year. Rather than shipping a runtime library to interpret your components in the browser, Svelte shifts that work to compile time. The result? Dramatically smaller bundles, faster startup times, and pure vanilla JavaScript that runs with minimal overhead.\nI\u0026rsquo;ve been working with Svelte in some projects for the past six months after years of React development, and the difference is striking. Let me walk you through why Svelte\u0026rsquo;s approach matters and how it\u0026rsquo;s changing the game for developers who care about performance without sacrificing developer experience.\nThe Problem with Traditional Frameworks Most popular frameworks today rely on the Virtual DOM pattern. When your application state changes, they build a complete representation of how the DOM should look, compare it to the previous representation, and then apply the minimal set of changes needed to update the actual DOM.\nThis approach works well enough, but it comes with considerable costs. You pay a performance tax for the framework\u0026rsquo;s runtime that grows with your application complexity. 
Initial page load performance suffers as users wait for large JavaScript bundles to download, parse, and execute before they see anything useful on screen. This is especially problematic for users on mobile devices or slower connections, which still represent a significant portion of global web traffic.\nAs one Redditor put it in a recent thread: \u0026ldquo;React isn\u0026rsquo;t slow, but it\u0026rsquo;s not free.\u0026rdquo; There\u0026rsquo;s always overhead in the abstraction.\nTo illustrate this point, consider a simple counter component in React:\nfunction Counter() { const [count, setCount] = useState(0); return ( \u0026lt;div\u0026gt; \u0026lt;p\u0026gt;You clicked {count} times\u0026lt;/p\u0026gt; \u0026lt;button onClick={() =\u0026gt; setCount(count + 1)}\u0026gt; Click me \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; ); } This seemingly simple component requires React\u0026rsquo;s entire runtime to be downloaded, parsed, and executed before anything appears on screen. For small applications, this might not matter much. But as applications grow in complexity, the overhead becomes more pronounced.\nSvelte\u0026rsquo;s Compiler-Based Approach Svelte takes a radically different approach. Instead of shipping a runtime framework that interprets your component code in the browser, Svelte is primarily a compiler that converts your components into highly optimized vanilla JavaScript at build time.\nHere\u0026rsquo;s the equivalent counter component in Svelte:\n\u0026lt;script\u0026gt; let count = 0; function handleClick() { count += 1; } \u0026lt;/script\u0026gt; \u0026lt;div\u0026gt; \u0026lt;p\u0026gt;You clicked {count} times\u0026lt;/p\u0026gt; \u0026lt;button on:click={handleClick}\u0026gt; Click me \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; What\u0026rsquo;s fascinating is what happens when you build this. 
Svelte analyzes your code and generates vanilla JavaScript that directly updates the DOM nodes that need changing without shipping unnecessary framework code. The compiler effectively \u0026ldquo;disappears\u0026rdquo; at runtime, leaving only the exact code needed to make your component work. This approach leads to smaller bundle sizes (often dramatically smaller), less JavaScript to parse and execute, no virtual DOM overhead, and ultimately faster initial rendering and updates.\nReactivity Without the Virtual DOM One of Svelte\u0026rsquo;s most elegant features is its approach to reactivity. Instead of relying on immutable state patterns or explicit state management libraries, Svelte uses simple assignment operators and compile-time magic.\nHere\u0026rsquo;s a slightly more complex example showing Svelte\u0026rsquo;s reactivity system:\n\u0026lt;script\u0026gt; let count = 0; let doubleCount; // This $: syntax creates a reactive statement $: doubleCount = count * 2; function increment() { count += 1; } \u0026lt;/script\u0026gt; \u0026lt;button on:click={increment}\u0026gt; Increment \u0026lt;/button\u0026gt; \u0026lt;p\u0026gt;The count is {count}\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;Double the count is {doubleCount}\u0026lt;/p\u0026gt; The $: syntax might look strange at first, but it\u0026rsquo;s actually valid JavaScript (a labeled statement). Svelte uses it to mark statements as reactive - whenever the variables they reference change, the statements run again. The compiler transforms this into efficient code that updates only what\u0026rsquo;s needed.\nNo virtual DOM diffing, no complex state management patterns, just straightforward code that gets compiled into precise DOM updates.\nBuilding Full Applications with Sapper While Svelte is excellent for building components and small apps, Sapper is the official framework for building full-featured applications with Svelte. 
It\u0026rsquo;s to Svelte what Next.js is to React - providing routing, server-side rendering, code splitting, and a sensible project structure.\nCreating a new Sapper project is straightforward:\nnpx degit \u0026#34;sveltejs/sapper-template#rollup\u0026#34; my-app cd my-app npm install npm run dev This gives you a fully configured Sapper application with file-based routing, server-side rendering by default, code splitting based on routes, and hot module replacement during development.\nA typical Sapper route component looks like this:\n\u0026lt;script context=\u0026#34;module\u0026#34;\u0026gt; // This runs at build time in Node.js or // during server-side rendering export async function preload(page, session) { const { slug } = page.params; const res = await this.fetch(`/blog/${slug}.json`); if (res.ok) { const article = await res.json(); return { article }; } return { status: res.status }; } \u0026lt;/script\u0026gt; \u0026lt;script\u0026gt; export let article; \u0026lt;/script\u0026gt; \u0026lt;h1\u0026gt;{article.title}\u0026lt;/h1\u0026gt; \u0026lt;div class=\u0026#34;content\u0026#34;\u0026gt; {@html article.content} \u0026lt;/div\u0026gt; This creates a blog article page that fetches its data during server-side rendering, delivering fully formed HTML to the browser before any JavaScript runs - perfect for performance and SEO.\nReal-World Performance: Case Studies E-commerce Product Configurator Last quarter, I worked with an e-commerce client who was struggling with their product configurator. Built with React, their interactive tool allowed customers to personalize products with different options, colors, and features. The original implementation had become increasingly sluggish as they added more configuration options.\nUsers reported frustration with lag when changing product options, especially on mobile devices. The issue was particularly pronounced when handling image swapping and price recalculation, with noticeable stuttering during interactions. 
Customers were abandoning the configurator altogether, directly impacting conversion rates.\nWe rebuilt the configurator in Svelte, keeping the exact same feature set and visual design. The transformation was dramatic: interaction delays dropped from ~260ms to nearly instantaneous responses. The improved performance was achieved through Svelte\u0026rsquo;s direct DOM updates and lightweight reactivity system that avoided the overhead of virtual DOM diffing every time a product option changed.\nThe before-and-after metrics told a compelling story:\nInitial load time decreased from 4.3 seconds to 1.6 seconds on average devices Bundle size shrank from 287KB to 78KB (gzipped) Time from option selection to UI update improved from ~260ms to ~50ms Mobile conversions increased by 18% in the first month after deployment The client\u0026rsquo;s development team, initially skeptical about adopting a less mainstream framework, became enthusiastic Svelte advocates after seeing how much more manageable the codebase became. The simplified state management and component architecture reduced the lines of code by approximately 40%, making future features easier to implement.\nThe Current Ecosystem Svelte\u0026rsquo;s ecosystem is smaller than React\u0026rsquo;s, but it\u0026rsquo;s growing rapidly. The UI component landscape includes Svelte Material UI, Carbon Components Svelte, and Smelte, providing ready-made components for rapid development. Traditional state management libraries are mostly unnecessary due to Svelte\u0026rsquo;s built-in reactivity, though svelte-store exists for complex cases that span multiple components.\nUnlike React, where CSS-in-JS libraries are common, Svelte\u0026rsquo;s built-in scoped styles make additional styling libraries less crucial, though options like svelte-styled-components exist for those who prefer that approach. 
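As for the stores mentioned above, part of why dedicated state libraries feel unnecessary is how small the store contract is. Here is a minimal plain-JavaScript sketch of the writable idea (a conceptual reimplementation for illustration only; in a real app you would simply import { writable } from 'svelte/store'):

```javascript
// A toy writable store mirroring the svelte/store contract:
// subscribe(fn) pushes the current value immediately and on every
// change, and returns an unsubscribe function; set/update mutate it.
function writable(value) {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn);
      fn(value); // Svelte stores deliver the current value on subscription
      return () => subscribers.delete(fn);
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    update(fn) {
      this.set(fn(value));
    },
  };
}

// Usage: a component that reads `$count` is effectively doing this.
const count = writable(0);
let latest;
const unsubscribe = count.subscribe((v) => (latest = v));
count.update((n) => n + 1); // latest is now 1
unsubscribe();
```

Anything exposing a compatible subscribe method satisfies this contract, which is why Svelte\u0026rsquo;s $store auto-subscription syntax also works with sources like RxJS observables.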
Svelte also includes powerful built-in transition and animation capabilities that are remarkably easy to use compared to the often complex animation implementations in other frameworks.\nThe development tooling ecosystem continues to mature, with the Svelte REPL making experimentation easy without setting up a project. The official Svelte website provides excellent documentation and tutorials. If you\u0026rsquo;re just getting started, the interactive tutorial is hands-down one of the best framework onboarding experiences I\u0026rsquo;ve encountered.\nTrade-offs and Considerations Svelte isn\u0026rsquo;t without its trade-offs, and it\u0026rsquo;s important to consider these before jumping in. Unlike jQuery or vanilla JS, you can\u0026rsquo;t just drop Svelte into a page with a script tag—you need a build step, though there are some experimental CDN options being developed. The smaller ecosystem means you won\u0026rsquo;t find as many third-party components or solutions as with React or Vue, which can slow development for common patterns that have ready-made solutions in more established frameworks.\nThe compilation-heavy approach has implications too. As one developer noted on deepu.tech, the toolchain can become complex for large applications, with potentially longer build times than you might expect. This is rarely a showstopper but should be factored into your development workflow planning.\nFrom a career perspective, React still dominates job listings, though Svelte positions are steadily increasing. 
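To make the transition claim concrete, here is a minimal sketch using the built-in fade from svelte/transition (the duration value is arbitrary):

```svelte
<script>
  import { fade } from 'svelte/transition';
  let visible = true;
</script>

<button on:click={() => (visible = !visible)}>Toggle</button>

{#if visible}
  <!-- fades in when added, fades out when removed; no extra library -->
  <p transition:fade={{ duration: 200 }}>Now you see me</p>
{/if}
```

That is the whole implementation; the equivalent in most other frameworks involves a third-party animation library and lifecycle bookkeeping.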
If you\u0026rsquo;re building your resume, having Svelte as a complementary skill to React or Vue rather than a replacement might be strategically wise for now.\nBrowser support is generally excellent for modern browsers, but you\u0026rsquo;ll need to configure your build process carefully if you need to support legacy browsers, as the compiled output uses modern JavaScript features by default.\nMaking the Switch If you\u0026rsquo;re considering Svelte for your next project, I recommend starting with the interactive tutorial on the Svelte website to get familiar with the syntax and mental model. From there, building a small side project will help you get comfortable with the patterns and workflow before committing to larger applications.\nConsider Sapper from the beginning if you\u0026rsquo;re building anything beyond a simple widget—its routing and server-side rendering capabilities provide a solid foundation that\u0026rsquo;s difficult to add retroactively. When you inevitably encounter questions or challenges, take advantage of the friendly Discord community, which has been incredibly welcoming to newcomers.\nFor teams already invested in other frameworks, a complete rewrite is rarely the right answer. Instead, consider using Svelte for new, self-contained features or performance-critical sections of your application. Some developers even follow a \u0026ldquo;Svelte for sites, React for apps\u0026rdquo; approach, choosing tools based on the specific needs of each project.\nLooking Forward Svelte represents a shift in how we think about building for the web. By moving work from runtime to build time, it challenges the assumptions that have dominated front-end development for years. The compiler approach seems likely to influence other frameworks. We\u0026rsquo;re already seeing similar ideas in React\u0026rsquo;s experimental compiler and Vue\u0026rsquo;s composition API. 
This cross-pollination of ideas ultimately benefits developers and users alike.\nWhether Svelte becomes the dominant framework or remains an influential alternative, its approach to simplicity and performance is reshaping how we think about building for the web.\n","permalink":"/posts/2020-11-02-svelte-3-compiler-as-framework//posts/2020-11-02-svelte-3-compiler-as-framework/","summary":"\u003cp\u003eThe front-end landscape has been dominated by React, Vue, and Angular for years now. These frameworks have fundamentally transformed how we build web applications, bringing reactivity, component-based architectures, and improved developer experiences. But they\u0026rsquo;ve also introduced significant runtime costs: virtual DOM diffing, component lifecycle management, and substantial JavaScript bundles that users must download and parse before seeing anything meaningful.\u003c/p\u003e\n\u003cp\u003eHear out Svelte 3, which has been gaining serious momentum since its release last year. Rather than shipping a runtime library to interpret your components in the browser, Svelte shifts that work to compile time. The result? Dramatically smaller bundles, faster startup times, and pure vanilla JavaScript that runs with minimal overhead.\u003c/p\u003e","title":"Svelte 3: The Compiler as Your Framework"},{"content":"Managing state beyond simple component scope is one of the perennial challenges. Redux was the default answer for anything complex, but the landscape recently feels… different. Hooks have fundamentally changed how we write React, and with them came a resurgence of interest in built-in solutions and even some exciting new contenders.\nWe\u0026rsquo;re seeing teams actively questioning the need for heavyweight libraries on every project. Can the built-in Context API truly handle complex applications? And what about this new experimental library, Recoil, that Facebook dropped on us earlier this year?\nThe Reigning Champ: Redux (Now with Less Boilerplate!) 
You can\u0026rsquo;t talk React state without talking Redux. It’s battle-tested, has an incredible ecosystem (hello, Redux DevTools!), and enforces a predictable, unidirectional data flow that brings sanity to large applications. For complex state interactions involving many components, especially when those interactions are frequent or asynchronous, Redux has traditionally shone.\nBut let\u0026rsquo;s be honest, the boilerplate has always been a point of contention. Defining actions, action creators, constants, reducers, wiring it all up with connect and mapStateToProps\u0026hellip; it could feel like a lot of ceremony, especially for smaller features.\nThankfully, Redux has also evolved. Redux Toolkit (RTK) has rapidly become the official, recommended way to write Redux logic. If you\u0026rsquo;re still writing Redux \u0026ldquo;by hand,\u0026rdquo; you owe it to yourself to check out RTK.\n// A taste of Redux Toolkit\u0026#39;s createSlice import { createSlice } from \u0026#39;@reduxjs/toolkit\u0026#39;; const counterSlice = createSlice({ name: \u0026#39;counter\u0026#39;, initialState: { value: 0 }, reducers: { increment: state =\u0026gt; { state.value += 1; }, decrement: state =\u0026gt; { state.value -= 1; }, incrementByAmount: (state, action) =\u0026gt; { state.value += action.payload; }, }, }); export const { increment, decrement, incrementByAmount } = counterSlice.actions; export default counterSlice.reducer; Look at that! createSlice generates action creators and action types for you, uses Immer under the hood for \u0026ldquo;mutable\u0026rdquo; reducer logic (making updates way simpler), and generally streamlines the whole process. 
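To see concretely what createSlice is doing on your behalf, here is a rough, dependency-free sketch of the hand-written equivalent: a reducer plus matching action creators. The `buildCounterSlice` helper is made up for illustration; real RTK output additionally wires in Immer (which is why its reducers can "mutate") and generates the action type strings for you.

```javascript
// Hand-rolled approximation of what createSlice generates for the
// counter example above: namespaced action creators plus a reducer.
function buildCounterSlice() {
  const increment = () => ({ type: 'counter/increment' });
  const decrement = () => ({ type: 'counter/decrement' });
  const incrementByAmount = (payload) => ({ type: 'counter/incrementByAmount', payload });

  // Without Immer we must return new state objects instead of mutating.
  function reducer(state = { value: 0 }, action) {
    switch (action.type) {
      case 'counter/increment':
        return { ...state, value: state.value + 1 };
      case 'counter/decrement':
        return { ...state, value: state.value - 1 };
      case 'counter/incrementByAmount':
        return { ...state, value: state.value + action.payload };
      default:
        return state;
    }
  }

  return { reducer, actions: { increment, decrement, incrementByAmount } };
}

const { reducer, actions } = buildCounterSlice();
let state = reducer(undefined, { type: '@@INIT' });
state = reducer(state, actions.increment());
state = reducer(state, actions.incrementByAmount(5));
console.log(state.value); // 6
```

Every line of this ceremony disappears behind the single createSlice call, which is the whole pitch.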
Combined with the useSelector and useDispatch hooks from react-redux, modern Redux feels much lighter and more integrated with the functional component world ushered in by Hooks.\nThe Verdict: For large-scale apps with complex, shared state, especially those already invested in the Redux ecosystem, RTK makes Redux a very strong and much more developer-friendly option than it used to be. The dev tools alone are often worth the price of admission. If you\u0026rsquo;re migrating an older connect-heavy codebase, moving to RTK and hooks is often the path of least resistance with significant DX wins.\nThe Built-in Contender: Context API + Hooks When Hooks landed, useContext suddenly made React\u0026rsquo;s built-in Context API a much more ergonomic solution for sharing state down the component tree without prop drilling. Paired often with useReducer for managing more complex state logic within a context provider, it offers a compelling alternative without adding external dependencies.\n// Simple Theme Context example import React, { createContext, useState, useContext } from \u0026#39;react\u0026#39;; const ThemeContext = createContext(); export function ThemeProvider({ children }) { const [theme, setTheme] = useState(\u0026#39;light\u0026#39;); const toggleTheme = () =\u0026gt; setTheme(prev =\u0026gt; (prev === \u0026#39;light\u0026#39; ? \u0026#39;dark\u0026#39; : \u0026#39;light\u0026#39;)); // Can also use useReducer here for more complex logic return ( \u0026lt;ThemeContext.Provider value={{ theme, toggleTheme }}\u0026gt; {children} \u0026lt;/ThemeContext.Provider\u0026gt; ); } export function useTheme() { return useContext(ThemeContext); } // In a component: // const { theme, toggleTheme } = useTheme(); This is great for state that doesn\u0026rsquo;t change too frequently, like theme information, user authentication status, or perhaps locale settings. It\u0026rsquo;s straightforward, uses React concepts directly, and avoids bundle size increases. 
What’s not to like?\nWell, there\u0026rsquo;s a \u0026ldquo;gotcha\u0026rdquo; that many teams are bumping up against: performance. When the value in a Context Provider changes, all components consuming that context via useContext will re-render, even if they only care about a small slice of the context value that didn\u0026rsquo;t actually change. For contexts holding large objects or state that updates very frequently, this can lead to noticeable performance issues as your application grows. Optimization often involves splitting contexts, memoization (React.memo), or reaching for other solutions.\nThe Verdict: Context is fantastic for avoiding prop drilling and managing low-frequency or relatively simple global state. It\u0026rsquo;s built-in and easy to grasp. However, be mindful of the performance implications for high-frequency updates or large state objects. It\u0026rsquo;s not typically a direct replacement for Redux in scenarios where the latter excels.\nThe New Challenger: Recoil Announced by Facebook back in May, Recoil generated a lot of buzz. It\u0026rsquo;s still explicitly marked as experimental, so tread carefully for critical production apps, but the ideas it presents are fascinating and directly target some of the pain points of both Redux and Context.\nRecoil approaches state management in a way that feels very \u0026ldquo;React-y.\u0026rdquo; The core concepts are atoms and selectors.\nAtoms: Units of state. Components can subscribe to individual atoms. When an atom updates, only the components subscribed to that specific atom re-render. Selectors: Pure functions that derive state from atoms or other selectors. Components can subscribe to selectors. Recoil manages dependencies, re-running the selector only when its upstream atoms/selectors change, and re-rendering subscribed components only when the selector\u0026rsquo;s output value changes. 
// Conceptual Recoil example import { atom, selector, useRecoilState, useRecoilValue } from \u0026#39;recoil\u0026#39;; const fontSizeState = atom({ key: \u0026#39;fontSizeState\u0026#39;, default: 14, }); const emphasizedTextState = selector({ key: \u0026#39;emphasizedTextState\u0026#39;, get: ({ get }) =\u0026gt; { const text = get(someOtherTextAtom); // Depends on another atom return text.toUpperCase(); }, }); // In a component: // const [fontSize, setFontSize] = useRecoilState(fontSizeState); // const emphasizedText = useRecoilValue(emphasizedTextState); The big promise here is performance through fine-grained subscriptions. By subscribing only to the atoms/selectors they actually need, components avoid the unnecessary re-renders often seen with Context. It also offers built-in solutions for asynchronous operations (async selectors) and derived state, potentially simplifying patterns that require middleware like Thunk or Saga in Redux.\nThe Verdict: Recoil is exciting and potentially offers the best of both worlds: the simplicity and React-idiomatic feel closer to Context, but with performance characteristics potentially better suited for complex, dynamic state, closer to Redux (without the historical boilerplate). The main caveats right now are its experimental status, smaller community/ecosystem compared to Redux, and the possibility of API changes before a stable 1.0 release. Teams are cautiously experimenting, especially on new projects or specific features where Context performance stings.\nMaking the Choice: It\u0026rsquo;s Complicated (in a Good Way) So, which one should you use? The answer, frustratingly but realistically, is \u0026ldquo;it depends.\u0026rdquo;\nLarge legacy app already on Redux? Migrating to Redux Toolkit and hooks is likely your best bet for modernization and improved developer experience. New app with complex global state needs? RTK is a mature, robust choice. 
Recoil is a promising, potentially simpler alternative if you\u0026rsquo;re willing to adopt an experimental library. Need to share simple, relatively static state (theme, auth)? Context API is often perfectly sufficient and keeps your dependency list lean. Hitting Context performance issues? Consider splitting contexts, memoization, or evaluating if Recoil (or even RTK) might be a better fit for that specific slice of state. We\u0026rsquo;re seeing many teams adopt a hybrid approach: using Context for simple cases like theming, maybe keeping core complex business logic in Redux (especially if already there), and perhaps experimenting with Recoil for new, isolated feature state.\nThe good news is that we have more viable options than ever. The shift triggered by Hooks has pushed the community to re-evaluate established patterns and embrace solutions that better fit the modern React paradigm. Redux has adapted well with RTK, Context has become genuinely useful, and newcomers like Recoil are pushing the boundaries. It\u0026rsquo;s a better time to be managing state in React.\n","permalink":"/posts/2020-10-26-redux-context-recoil//posts/2020-10-26-redux-context-recoil/","summary":"\u003cp\u003eManaging state beyond simple component scope is one of the perennial challenges. Redux was the default answer for anything complex, but the landscape recently feels… different. Hooks have fundamentally changed how we write React, and with them came a resurgence of interest in built-in solutions and even some exciting new contenders.\u003c/p\u003e\n\u003cp\u003eWe\u0026rsquo;re seeing teams actively questioning the need for heavyweight libraries on every project. Can the built-in Context API truly handle complex applications? 
And what about this new experimental library, Recoil, that Facebook dropped on us earlier this year?\u003c/p\u003e","title":"Navigating React State: Redux, Context, and the New Kid, Recoil"},{"content":"The SPA security landscape continues to evolve rapidly this year, and if you\u0026rsquo;re still using the Implicit OAuth flow that was recommended just a few years ago, it\u0026rsquo;s time for a serious rethink of your authentication architecture. With increasing browser restrictions and an evolving threat model, our frontend security approaches need a refresh.\nThe Problem with Implicit Flow For years, the OAuth 2.0 Implicit flow was the go-to authentication pattern for single-page applications. The reasoning seemed sound: since SPAs couldn\u0026rsquo;t securely store client secrets (being fully client-side), we\u0026rsquo;d use a simplified flow that returned tokens directly in the URL fragment.\nBut as we\u0026rsquo;ve learned more about browser security models and XSS vulnerabilities, the security limitations of this approach have become apparent:\nAccess tokens are exposed in browser history No refresh token capability (leading to frequent re-authentication) Limited token validation capabilities Higher vulnerability to cross-site scripting attacks The OAuth working group has even formally updated their security guidance to recommend against using the Implicit flow for new applications. So what\u0026rsquo;s the alternative?\nEnter Authorization Code Flow with PKCE The Authorization Code flow with PKCE (Proof Key for Code Exchange) has emerged as the recommended approach for securing SPAs. Originally designed for mobile applications, PKCE brings the security benefits of the Authorization Code flow without requiring a client secret.\nAs pragmaticwebsecurity.com notes, \u0026ldquo;PKCE effectively links the initialization of the flow to the finalization of the flow\u0026rdquo; by acting like a one-time password that authenticates a specific client instance. 
This prevents authorization code interception attacks that public clients would otherwise be vulnerable to.\nLet\u0026rsquo;s look at how this flow works:\nYour application generates a cryptographically random string (the code verifier) The application derives a code challenge from the verifier using SHA-256 The authorization request includes this code challenge After user authentication, your app receives an authorization code When exchanging this code for tokens, your app sends the original code verifier The authorization server verifies that the challenge and verifier match This flow provides significant security advantages while improving user experience through refresh token support.\nImplementing PKCE in React Applications Here\u0026rsquo;s a simplified example of implementing PKCE in a React application using a modern auth library:\n// Generate code verifier and challenge function generateCodeVerifier() { const array = new Uint8Array(32); window.crypto.getRandomValues(array); return base64UrlEncode(array); } function generateCodeChallenge(codeVerifier) { return crypto.subtle.digest(\u0026#39;SHA-256\u0026#39;, new TextEncoder().encode(codeVerifier)) .then(digest =\u0026gt; base64UrlEncode(new Uint8Array(digest))); } // In your login component async function initiateLogin() { // Generate and store PKCE values const codeVerifier = generateCodeVerifier(); localStorage.setItem(\u0026#39;code_verifier\u0026#39;, codeVerifier); const codeChallenge = await generateCodeChallenge(codeVerifier); // Redirect to authorization endpoint window.location = `${authEndpoint}?` + `client_id=${clientId}\u0026amp;` + `redirect_uri=${redirectUri}\u0026amp;` + `response_type=code\u0026amp;` + `code_challenge=${codeChallenge}\u0026amp;` + `code_challenge_method=S256`; } Don\u0026rsquo;t roll out your own authorization servers by hand. Use solid libraries instead. 
Established libraries like Auth0 SPA SDK, Okta\u0026rsquo;s Auth JS, or AWS Amplify have implemented these patterns with thorough security reviews.\nRotating Refresh Tokens While PKCE helps us securely obtain access and refresh tokens, we need strategies to manage these tokens securely. Rotating refresh tokens is an emerging best practice that adds a crucial layer of security.\nThe concept is straightforward:\nEach time you use a refresh token to get a new access token The authorization server invalidates the old refresh token And issues a new refresh token alongside the new access token This approach limits the damage from a leaked refresh token since each refresh token can only be used once. If an attacker somehow obtains a refresh token, it will likely be invalidated before they can use it.\nMany major providers now support rotating refresh tokens, including Auth0, Okta, and Azure AD B2C. When configuring your client, look for options like \u0026ldquo;rotation\u0026rdquo; or \u0026ldquo;one-time use\u0026rdquo; for refresh tokens.\nXSS Mitigation: Defense in Depth Even with PKCE and rotating refresh tokens, cross-site scripting (XSS) remains a significant threat to SPAs. 
A successful XSS attack could still compromise tokens stored in memory or intercept them during the authentication flow.\nImplement these additional protections:\nContent Security Policy (CSP) - Restrict what resources can be loaded and executed in your application:\n\u0026lt;meta http-equiv=\u0026#34;Content-Security-Policy\u0026#34; content=\u0026#34;default-src \u0026#39;self\u0026#39;; script-src \u0026#39;self\u0026#39;; connect-src \u0026#39;self\u0026#39; https://api.yourdomain.com https://auth.yourdomain.com;\u0026#34;\u0026gt; HttpOnly and SameSite cookies - If your authentication architecture uses cookies for maintaining session state (even in part):\nSet-Cookie: session=123; HttpOnly; Secure; SameSite=Strict State validation - Always validate that the OAuth state parameter returned matches what was sent.\nX-XSS-Protection and Referrer-Policy headers - Additional HTTP headers that add layers of protection:\nX-XSS-Protection: 1; mode=block Referrer-Policy: strict-origin-when-cross-origin Auditing Client-Side Routing Modern SPAs commonly use client-side routing libraries like React Router, Vue Router, or Angular Router. 
These routers deserve special attention when conducting security audits.\nCommon vulnerabilities include:\nParameter pollution - Ensure route parameters are correctly sanitized before use Route redirection attacks - Validate that redirects only go to trusted origins Token exposure in URLs - Never include tokens or sensitive data in URL parameters Review your routes with a security mindset:\n// Vulnerable approach - could lead to XSS const UserProfile = ({ match }) =\u0026gt; { const { username } = match.params; return \u0026lt;div dangerouslySetInnerHTML={{ __html: `Profile for ${username}` }} /\u0026gt;; }; // Safer approach const UserProfile = ({ match }) =\u0026gt; { const { username } = match.params; return \u0026lt;div\u0026gt;Profile for {username}\u0026lt;/div\u0026gt;; }; State Management Security The way you store and manipulate state can significantly impact your application\u0026rsquo;s security posture.\nRedux/Vuex State Serialization - Be cautious when persisting state:\n// Configure Redux Persist carefully const persistConfig = { key: \u0026#39;root\u0026#39;, storage, // Never persist authentication tokens in localStorage blacklist: [\u0026#39;auth\u0026#39;] }; Memory-Only Token Storage - Keep tokens in memory, not localStorage or sessionStorage:\n// In your auth service let accessToken = null; let tokenExpiry = null; let refreshToken = null; export function setTokens(tokens) { accessToken = tokens.access_token; tokenExpiry = new Date(Date.now() + tokens.expires_in * 1000); refreshToken = tokens.refresh_token; } export function getAccessToken() { if (tokenExpiry \u0026amp;\u0026amp; tokenExpiry \u0026gt; new Date()) { return accessToken; } else { // Handle refreshing logic return refreshTokenIfNeeded(); } } Real-World Migration Experiences Recently guided a team migrating a complex React + Redux application from Implicit flow to Authorization Code with PKCE. 
The actual authentication code changes were relatively straightforward, but we encountered several interesting challenges:\nAPI Coordination - Our backend had to be updated to validate new token formats and handle refresh requests Session Management - The longer-lived sessions (via refresh tokens) required new UX considerations for session timeouts and activity tracking Testing Complexity - Our test suites needed significant updates to handle the more complex auth flow Despite the challenges, the migration had clear benefits. User experience improved dramatically with fewer re-authentications, and the security team was much happier with the updated architecture.\nConclusion The security landscape for SPAs continues to evolve rapidly. The shift from Implicit flow to Authorization Code with PKCE represents a significant improvement in security without compromising user experience. In fact, by enabling the use of refresh tokens, it actually improves UX by reducing the frequency of logins.\n","permalink":"/posts/2020-09-12-authentication-for-spa//posts/2020-09-12-authentication-for-spa/","summary":"\u003cp\u003eThe SPA security landscape continues to evolve rapidly this year, and if you\u0026rsquo;re still using the Implicit OAuth flow that was recommended just a few years ago, it\u0026rsquo;s time for a serious rethink of your authentication architecture. With increasing browser restrictions and an evolving threat model, our frontend security approaches need a refresh.\u003c/p\u003e\n\u003ch2 id=\"the-problem-with-implicit-flow\"\u003eThe Problem with Implicit Flow\u003c/h2\u003e\n\u003cp\u003eFor years, the OAuth 2.0 Implicit flow was the go-to authentication pattern for single-page applications. 
The reasoning seemed sound: since SPAs couldn\u0026rsquo;t securely store client secrets (being fully client-side), we\u0026rsquo;d use a simplified flow that returned tokens directly in the URL fragment.\u003c/p\u003e","title":"Modern Authentication for Single Page Applications"},{"content":"In the quest for better user engagement, we\u0026rsquo;ve collectively moved past the flashy animations of the early web (farewell, Flash) to embrace more considered, purpose-driven motion design. As we navigate the challenges of 2020\u0026rsquo;s remote-first world, keeping users engaged with our interfaces has never been more critical. Let\u0026rsquo;s explore how to implement effective motion UI with current tools and techniques that won\u0026rsquo;t tank your performance.\nThe Psychology Behind Motion Before diving into implementation, it\u0026rsquo;s worth understanding why we\u0026rsquo;re adding motion in the first place. Motion isn\u0026rsquo;t just eye candy—it creates cognitive anchors that help users understand what\u0026rsquo;s happening in your interface.\nDisney figured this out decades ago with their famous 12 principles of animation. Many of these principles translate surprisingly well to UI design. As dribbble.com notes, \u0026ldquo;Most of the objects in the real world follow motions which are eased.\u0026rdquo; This natural movement creates interfaces that feel alive rather than mechanical.\nThe key, however, is subtlety. 
I\u0026rsquo;ve seen too many sites lately where developers went overboard with animations, creating experiences that feel like navigating through molasses.\nPopular Motion Libraries in 2020 Several libraries have emerged as front-runners this year:\nFramer Motion - React\u0026rsquo;s golden child for animations GSAP - The venerable animation platform for complex sequences Motion UI by ZURB - Built on Sass, perfect for Foundation users Anime.js - Lightweight JavaScript animation library React Spring - Physics-based animations for React For this post, I\u0026rsquo;ll focus on Motion UI by ZURB, which is particularly well-suited for scroll-triggered animations and micro-interactions using Sass.\nSetting Up Motion UI with Sass If you\u0026rsquo;re using Foundation, Motion UI might already be part of your workflow. If not, installation is straightforward:\nnpm install motion-ui --save Then in your main Sass file:\n@import \u0026#39;motion-ui\u0026#39;; @include motion-ui-transitions; @include motion-ui-animations; What\u0026rsquo;s great about Motion UI is how it leverages Sass mixins to create transition classes that you can apply directly in your markup:\n.fade-in-element { @include mui-hinge( $state: in, $from: top, $fade: true, $timing: 0.5s ); } Scroll-Triggered Animations: The Right Way Scroll-triggered animations have become ubiquitous, but implementing them efficiently is another story. 
Here\u0026rsquo;s a pattern I\u0026rsquo;ve found effective:\n// Using Intersection Observer API (now widely supported) const observer = new IntersectionObserver(entries =\u0026gt; { entries.forEach(entry =\u0026gt; { if (entry.isIntersecting) { entry.target.classList.add(\u0026#39;is-visible\u0026#39;); observer.unobserve(entry.target); // Stop observing once triggered } }); }, { threshold: 0.1 // Trigger when 10% of element is visible }); // Target all elements with animation classes document.querySelectorAll(\u0026#39;.animate-on-scroll\u0026#39;).forEach(element =\u0026gt; { observer.observe(element); }); Paired with your Motion UI classes:\n.animate-on-scroll { opacity: 0; transition: opacity 0.3s ease-out; \u0026amp;.is-visible { opacity: 1; } } This approach is vastly superior to scroll event listeners which fire constantly and can cripple performance.\nMicro-Interactions for Feedback Micro-interactions provide immediate feedback to user actions. Consider this example for a form button:\n@mixin button-interaction { transition: transform 0.15s ease-in-out, box-shadow 0.15s ease-in-out; \u0026amp;:hover { transform: translateY(-2px); box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); } \u0026amp;:active { transform: translateY(0); box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1); } } .submit-button { @include button-interaction; } These subtle cues mimic physical interactions, making interfaces feel more responsive and tangible.\nPerformance Considerations Animation performance is a critical concern, especially for mobile users who might be on slower devices or metered connections.\nHere are my rules of thumb:\nStick to opacity and transform properties - They\u0026rsquo;re GPU-accelerated in modern browsers Avoid animating layout properties - Things like width, height, and margin force costly reflows Use the will-change property sparingly - It\u0026rsquo;s powerful but can backfire if overused .performant-animation { transform: translateZ(0); // Hardware acceleration hack 
transition: transform 0.3s cubic-bezier(0.25, 0.1, 0.25, 1); \u0026amp;:hover { transform: scale(1.05); } } Real-World Example: Navigation Transition Let\u0026rsquo;s build a practical example. Here\u0026rsquo;s a responsive main navigation with scroll-triggered behavior:\n// Main Sass file @import \u0026#39;motion-ui\u0026#39;; // Custom transitions @include motion-ui-transitions; @include motion-ui-animations; // Navigation styles .main-nav { position: fixed; top: 0; left: 0; right: 0; padding: 20px; transition: background-color 0.3s ease, padding 0.3s ease; \u0026amp;.scrolled { background-color: rgba(255, 255, 255, 0.95); padding: 10px 20px; box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1); .nav-link { color: #333; } } .nav-link { color: white; transition: color 0.3s ease; \u0026amp;:hover { @include mui-zoom( $from: 1, $to: 1.1, $duration: 0.2s ); } } } // JavaScript to handle scroll behavior const nav = document.querySelector(\u0026#39;.main-nav\u0026#39;); let scrollThreshold = 100; window.addEventListener(\u0026#39;scroll\u0026#39;, () =\u0026gt; { if (window.scrollY \u0026gt; scrollThreshold) { nav.classList.add(\u0026#39;scrolled\u0026#39;); } else { nav.classList.remove(\u0026#39;scrolled\u0026#39;); } }, { passive: true }); // Performance optimization This creates a navigation that subtly transforms as the user scrolls down the page, providing context about their position while maintaining access to navigation controls.\nWhere Do We Go From Here? Several design systems have begun treating motion as a first-class citizen. Carbon and Lightning have considered motion as an important aspect of design and formulated specifications for timing, easing and other transition parameters.\nThis standardization helps create consistent experiences across products and reduces the cognitive load on both designers and developers.\nFinding Inspiration If you\u0026rsquo;re looking to level up your animation game, there are countless inspirational sources. 
For animated backgrounds specifically, Prototypr.io recently published \u0026ldquo;5 CSS Animated Backgrounds to Inspire Your Next Project,\u0026rdquo; showcasing some creative approaches from simple to complex.\nConclusion Effective motion design is about enhancing the user experience rather than distracting from it. Some users might appreciate subtle animations, while others might prefer more pronounced effects.\nThe tools available to us today make implementing thoughtful animations more accessible than ever. Whether you\u0026rsquo;re using Motion UI with Sass, Framer Motion with React, or vanilla CSS transitions, the principles remain the same: be intentional, be performant, and above all, serve the user\u0026rsquo;s needs.\n","permalink":"/posts/2020-07-28-motion-with-intent//posts/2020-07-28-motion-with-intent/","summary":"\u003cp\u003eIn the quest for better user engagement, we\u0026rsquo;ve collectively moved past the flashy animations of the early web (farewell, Flash) to embrace more considered, purpose-driven motion design. As we navigate the challenges of 2020\u0026rsquo;s remote-first world, keeping users engaged with our interfaces has never been more critical. Let\u0026rsquo;s explore how to implement effective motion UI with current tools and techniques that won\u0026rsquo;t tank your performance.\u003c/p\u003e\n\u003ch2 id=\"the-psychology-behind-motion\"\u003eThe Psychology Behind Motion\u003c/h2\u003e\n\u003cp\u003eBefore diving into implementation, it\u0026rsquo;s worth understanding why we\u0026rsquo;re adding motion in the first place. 
Motion isn\u0026rsquo;t just eye candy—it creates cognitive anchors that help users understand what\u0026rsquo;s happening in your interface.\u003c/p\u003e","title":"Motion with Intent: Engineering UI Animations That Elevate User Experience"},{"content":"As we find ourselves firmly in the middle of 2020 (what a year so far, right?), web developers have spent several years working with WebAssembly (WASM). What began as an exciting experiment has now matured into a robust, production-ready technology with widespread browser support and growing adoption across the industry.\nThe Performance Promise JavaScript has served us well for decades, but we\u0026rsquo;ve all hit that performance wall, the point where no amount of optimization seems to help. Whether you\u0026rsquo;re building data visualization tools, complex animations, or processing user-generated content, JavaScript\u0026rsquo;s interpreted nature introduces unavoidable overhead.\nEnter WebAssembly (or Wasm), a binary instruction format that runs at near-native speeds in the browser. Now with over three years of development and refinement, WASM has proven its capability to deliver predictable, high-performance execution across all modern browsers since becoming a W3C recommendation in December 2019.\nBut is the performance gain worth the added complexity? Let\u0026rsquo;s find out by diving into a small example.\nImage Processing: A Perfect Test Case Image manipulation is a classic example of computationally intensive work when implemented in pure JavaScript. 
Let\u0026rsquo;s build a simple brightness adjustment function in both JavaScript and WebAssembly to see the difference.\nThe JavaScript Approach Here\u0026rsquo;s how we might adjust the brightness of an image using vanilla JavaScript:\nfunction adjustBrightness(imageData, intensity) { const pixels = imageData.data; for (let i = 0; i \u0026lt; pixels.length; i += 4) { pixels[i] = Math.min(255, pixels[i] + intensity); // Red pixels[i+1] = Math.min(255, pixels[i+1] + intensity); // Green pixels[i+2] = Math.min(255, pixels[i+2] + intensity); // Blue // Alpha channel (i+3) remains unchanged } return imageData; } Simple enough, right? For each pixel in our image, we increase the RGB values by the specified intensity, making sure not to exceed 255.\nThe WebAssembly Approach To leverage WebAssembly, I\u0026rsquo;ll use C and Emscripten, which has matured nicely since its early days. If you haven\u0026rsquo;t set up Emscripten yet, the official documentation provides clear installation instructions.\nHere\u0026rsquo;s our C implementation:\n// brightness.c #include \u0026lt;emscripten.h\u0026gt; #include \u0026lt;stdint.h\u0026gt; EMSCRIPTEN_KEEPALIVE void processBrightness(uint8_t* pixels, int length, int intensity) { for (int i = 0; i \u0026lt; length; i += 4) { int r = pixels[i] + intensity; int g = pixels[i+1] + intensity; int b = pixels[i+2] + intensity; pixels[i] = (r \u0026gt; 255) ? 255 : r; pixels[i+1] = (g \u0026gt; 255) ? 255 : g; pixels[i+2] = (b \u0026gt; 255) ? 
255 : b; // Alpha remains unchanged } } Compile this to WebAssembly using Emscripten:\nemcc brightness.c -o brightness.js -s WASM=1 -s EXPORTED_FUNCTIONS=\u0026#34;[\u0026#39;_processBrightness\u0026#39;,\u0026#39;_malloc\u0026#39;,\u0026#39;_free\u0026#39;]\u0026#34; -O3 This command generates both brightness.js (glue code) and brightness.wasm (the actual WebAssembly module). Note that _malloc and _free are exported explicitly, since our JavaScript code calls them directly.\nIntegrating WebAssembly Into Your App Now let\u0026rsquo;s bring everything together:\n// Load the WebAssembly module (we instantiate the .wasm directly for simplicity; depending on your Emscripten settings, the generated brightness.js glue code may be needed to supply the module imports) let wasmModule; fetch(\u0026#39;brightness.wasm\u0026#39;) .then(response =\u0026gt; response.arrayBuffer()) .then(buffer =\u0026gt; WebAssembly.instantiate(buffer)) .then(module =\u0026gt; { wasmModule = module.instance; console.log(\u0026#39;WebAssembly module loaded!\u0026#39;); enableUI(); // Enable our UI now that everything is loaded }); // Process with WebAssembly function processImageWithWasm(imageData, intensity) { const pixels = imageData.data; // Get the memory from the WASM module const memory = wasmModule.exports.memory; // Allocate memory in the WebAssembly instance const ptr = wasmModule.exports.malloc(pixels.length); // Create a view into WebAssembly memory const heap = new Uint8Array(memory.buffer, ptr, pixels.length); // Copy image data to WebAssembly memory heap.set(new Uint8Array(pixels.buffer)); // Call the WebAssembly function wasmModule.exports.processBrightness(ptr, pixels.length, intensity); // Copy the result back pixels.set(heap.subarray(0, pixels.length)); // Free the allocated memory wasmModule.exports.free(ptr); return imageData; } The Performance Showdown I tested both implementations on a variety of images and measured execution times. The results speak for themselves:\nJavaScript Processing: 200-300ms for a 1080p image WebAssembly Processing: 40-60ms for the same image That\u0026rsquo;s a 4-5x performance improvement! 
And this is for a relatively simple operation; the gap widens further with more complex algorithms like convolutions or Fourier transforms.\nReal-world Considerations While WebAssembly clearly wins the performance race, there are trade-offs to consider:\nThe Good Predictable Performance: WebAssembly execution times are more consistent, making it ideal for applications requiring smooth user experiences.\nLanguage Choice: You\u0026rsquo;re not limited to JavaScript anymore. C, C++, Rust, and even Go can now be part of your web development toolkit.\nMature Ecosystem: After three years of widespread browser support (Firefox and Chrome in 2017, followed by Edge and Safari), WebAssembly now benefits from established toolchains, comprehensive documentation, and growing community knowledge.\nThe Challenges Development Complexity: The build process is more involved, and you\u0026rsquo;ll need to be comfortable with lower-level languages.\nDebugging: While tools like Chrome DevTools are adding WebAssembly support, debugging is still more challenging than with JavaScript.\nMemory Management: Working directly with memory requires care to avoid leaks and fragmentation.\nWho\u0026rsquo;s Using WebAssembly Today? 
The adoption of WebAssembly continues to grow:\nFigma leverages WebAssembly for their design tool, achieving desktop-quality performance in the browser.\nGoogle Earth uses WebAssembly to deliver their full 3D mapping experience on the web.\nAutoCAD Web brings professional CAD capabilities to browsers through WebAssembly.\nSquoosh (Google\u0026rsquo;s image compression app) demonstrates impressive performance gains in image processing.\nGetting Started Today If you\u0026rsquo;re intrigued and want to start exploring WebAssembly, here\u0026rsquo;s a modern workflow that works well:\nChoose your language: Rust is gaining popularity for WebAssembly development thanks to its memory safety and performance, but C/C++ with Emscripten remains a solid choice.\nTooling:\nwasm-pack for Rust developers\nEmscripten for C/C++ developers\nAssemblyScript for TypeScript developers wanting an easier entry point\nLearn the basics: Mozilla\u0026rsquo;s WebAssembly documentation provides an excellent overview.\nStart small: Begin by porting small, performance-critical functions rather than entire applications.\nConclusion WebAssembly is proving itself as more than just a JavaScript replacement; it\u0026rsquo;s a complementary technology that enables new capabilities for web applications. While it\u0026rsquo;s not the right solution for every problem, it\u0026rsquo;s an invaluable tool for performance-critical tasks.\nThe ecosystem is rapidly evolving, with new tools and frameworks appearing regularly. Projects like the WebAssembly System Interface (WASI) are expanding its potential beyond browsers, hinting at a future where WebAssembly becomes a universal runtime.\nFor now, though, your web application may be hitting JavaScript performance limits, especially with data processing, graphics, or complex algorithms. 
It\u0026rsquo;s a perfect time to explore what WebAssembly can do for you.\nHappy coding, and stay safe in these unusual times!\n","permalink":"/posts/2020-05-16-accelerating-the-web-with-wasm/","summary":"\u003cp\u003eAs we find ourselves firmly in the middle of 2020 (what a year so far, right?), web developers have spent several years working with WebAssembly (WASM). What began as an exciting experiment has now matured into a robust, production-ready technology with widespread browser support and growing adoption across the industry.\u003c/p\u003e\n\u003ch2 id=\"the-performance-promise\"\u003eThe Performance Promise\u003c/h2\u003e\n\u003cp\u003eJavaScript has served us well for decades, but we\u0026rsquo;ve all hit that performance wall, the point where no amount of optimization seems to help. Whether you\u0026rsquo;re building data visualization tools, complex animations, or processing user-generated content, JavaScript\u0026rsquo;s interpreted nature introduces unavoidable overhead.\u003c/p\u003e","title":"Testing WebAssembly Performance: Image Processing Example"}]