
A Vue Component Library Template With TypeScript and Vite

· 3 min read

I wanted to create a Vue component library, and it took me a few tries to get it right. The particular difficulty lies in how TypeScript and Vite need to be configured to generate the correct files for publishing.

This article serves as a note for me on how to set up a Vue component project, and it is up-to-date as of 2025. The sample component starter is available on GitHub and can be used as a template, forked, or cloned for a quick start.

Below, I outline some critical points, briefly covered in the README of the starter template, that are the trickiest aspects to get right.

The Build Process

Usually, we distribute JavaScript files along with type declarations so that the component can be used seamlessly in TypeScript projects. While Vite can generate both the JavaScript files and the type declarations (the latter via a plugin), we can instead split the workload between Vite and the TypeScript compiler. The split is as follows:

  • vue-tsc (a wrapper around tsc) will generate type declaration files.
  • Vite will emit JavaScript (and other) files.

vite.config.ts

In vite.config.ts, we need to configure the build settings so that the project is built as a library. This includes defining the entry point and specifying the file name of the final JavaScript output.

One issue I got stuck on was the build process: Vite automatically deletes the entire dist folder, which removes the type declaration files generated by vue-tsc. This can be avoided by setting emptyOutDir: false. With this, we must configure the build script to manually clean the folder at the start of every build. Another approach is to reorder the procedure: run vite build first, then generate the type files.
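
As a rough sketch, a library-mode vite.config.ts might look like the following (the entry path, library name, and output file name are assumptions for illustration, not taken verbatim from the starter):

import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
  build: {
    // Keep the type declarations emitted by vue-tsc intact.
    emptyOutDir: false,
    lib: {
      entry: 'src/index.ts', // assumed entry point
      name: 'MyComponentLibrary', // assumed global name for the UMD build
      fileName: 'my-component-library',
    },
    rollupOptions: {
      // Don't bundle Vue itself; the consuming app provides it.
      external: ['vue'],
      output: {
        globals: { vue: 'Vue' },
      },
    },
  },
})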

package.json

Several fields are important for configuring the project; a sketch tying them together follows the list:

  • exports: Defines how your component can be consumed, including specifying where the type information is located.
  • build: If we need to manually empty the output directory, we may use something like:
    "build": "rm -rf dist && vue-tsc -b && vite build"
  • files: Controls which files are uploaded to the npm registry.
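
A minimal package.json sketch combining these fields (the package name, output file names, and module type are assumptions for illustration):

{
  "name": "my-component-library",
  "type": "module",
  "files": ["dist"],
  "exports": {
    ".": {
      "types": "./dist/types/index.d.ts",
      "import": "./dist/my-component-library.js"
    }
  },
  "scripts": {
    "build": "rm -rf dist && vue-tsc -b && vite build"
  }
}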

tsconfig.app.json

A project scaffolded from Vite's vue-ts template separates its TypeScript settings into several JSON files. The most important one is tsconfig.app.json, which controls TypeScript compilation for the Vue component.

{
  "compilerOptions": {
    "noEmit": false, // Important: allows type declaration files to be emitted
    "emitDeclarationOnly": true, // Prevents generating JS files, which Vite handles
    "declaration": true, // Emit .d.ts files
    "declarationMap": true, // Emit .d.ts.map files for go-to-source navigation
    "outDir": "dist/types",
    "rootDir": "src"
  }
}

I struggled for a long time trying to figure out why no type declarations were generated despite tweaking the configuration. It turns out that the noEmit field is set to true in the default extended config of the vue-ts template. We need to allow emitting type files by setting noEmit to false, and to avoid generating JavaScript files (which Vite handles), we set emitDeclarationOnly to true. The remaining fields matter too: declaration and declarationMap enable .d.ts files and their source maps, while outDir and rootDir control where the declarations are emitted and keep their layout mirroring src.

Summary

All configurations must be set up correctly to make the package ready for publishing. The full sequence from scaffolding to publishing is included in the README.md if you want to follow along.
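
Once published, consuming the library is straightforward; here is a sketch assuming hypothetical package and component names:

import { createApp } from 'vue'
import { MyButton } from 'my-component-library' // hypothetical package name
import App from './App.vue'

createApp(App).component('MyButton', MyButton).mount('#app')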

The Need For Speed to Deliver Your Website Fast

· 5 min read

Why The Need For Speed

Websites are viewed on browsers of all kinds, on devices of all kinds, and over network speeds of all kinds. These variables mean the same website can be experienced very differently. One notable difference is the speed of delivery: how fast a page becomes fully loaded or interactive for the user. When there is significant delay, users may stare at a blank page until the fully rendered page suddenly appears. This is the main focus of this discussion.

Issue

The process of loading a website typically follows these steps:

  1. Address entered in the browser address bar
  2. Browser requests the required HTML/JS/CSS from the server
  3. Browser loads the page, making it viewable by the user

Factors That May Impact the Process

  1. Server – It could be slow in handling user requests, taking a long time to retrieve or compute and return the necessary assets.
  2. Network – The speed of data transmission might be slow, limiting how fast data can be transferred.
  3. Asset Size – Larger files take longer to transmit, affecting load times.
  4. Browser – The browser's processing speed also influences how quickly a page can be rendered.

The server and browser are somewhat related, as are network speed and asset size. For example, a high-performance server may still be bottlenecked by a slow browser (device/compute dependent), and a fast network may still struggle with large asset sizes.

Ideally, optimizing all four factors (more powerful server, faster network, smaller assets, and a more capable browser) would eliminate bottlenecks. However, some components are harder to guarantee than others. For example:

  • Your browser might run on a powerful computer, but if the machine is under heavy load, performance will degrade.
  • Your network might be fast as advertised, but unexpected delays may occur if the upstream provider experiences issues or physical cables suffer damage from natural disasters.

There are many strategies to optimize delivery, such as caching, which may be built into a component or protocol to improve efficiency. When fine-tuning optimizations, it's crucial to assess the specific setup of your website and apply targeted improvements accordingly.

Practical Considerations

1. What Kind of Website?

The optimization approach varies based on the type of website.

For a vanilla website (simple HTML, JavaScript, and CSS), the primary concern is file size—smaller files lead to faster load times.

For framework-based websites (e.g., React), optimization depends on whether you're building a Single Page Application (SPA) or a Multi-Page Application (MPA):

  • MPA – Optimize at the page level, ensuring each page loads only the necessary code.
  • SPA – Optimize using code splitting (chunking), so that only the JavaScript required for a given page loads, reducing initial load time (see the sketch after this list).
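
As a minimal sketch of code splitting, a dynamic import tells the bundler to emit the target module as a separate chunk (the module path here is a hypothetical example):

// The bundler emits './pages/SettingsPage' as its own chunk, fetched only
// when loadSettingsPage() is called, e.g. on navigation.
const loadSettingsPage = () => import('./pages/SettingsPage')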

While chunking reduces the initial bundle size, it introduces a challenge: most chunking algorithms generate hashed, effectively unpredictable file names to prevent collisions, so output names change between builds. Manual chunking can help but requires careful management. A common strategy, sketched below, is to chunk based on third-party dependencies, keeping external libraries separate. However, this means any dependency update could change the bundle structure.
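
In a Vite/Rollup setup (assumed here for illustration; other bundlers expose similar options), a vendor chunk can be carved out like this:

import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // Put every third-party module into a shared "vendor" chunk.
          if (id.includes('node_modules')) {
            return 'vendor'
          }
        },
      },
    },
  },
})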

2. How to Reduce File Size

Common techniques for reducing file size include:

  1. Minification – Strips out unnecessary characters, such as comments and whitespace, from code.
  2. Compression – Reduces file size further using algorithms like Gzip or Brotli.

Compression can be applied at two levels:

  • Server-side – Configuring the server to perform compression on the fly when serving files.
  • Build-time – Generating pre-compressed files during the build process to be served when supported by the client.

While compression reduces network transfer time, decompression at the browser level introduces slight delays. If optimizing down to milliseconds, this tradeoff should be considered.
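
As a sketch of the server-side option from the list above, an Express server (an assumed setup, not something prescribed here) can compress responses on the fly with the compression middleware:

import express from 'express'
import compression from 'compression'

const app = express()

// Negotiates gzip with clients via the Accept-Encoding header.
app.use(compression())

// Serve the built static assets.
app.use(express.static('dist'))

app.listen(3000, () => {
  console.log('Listening on http://localhost:3000')
})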

3. What Code-Level Changes Can Be Made?

Code-level optimizations focus on reducing unnecessary code in the final bundle. A key technique is tree shaking, which eliminates unused code during bundling.

Example: Importing only what's needed

import Button from '@mui/material/Button';

instead of importing the entire library:

import { Button, TextField } from '@mui/material';

According to the Material-UI documentation, top-level imports can still be optimized by the bundler, provided your setup supports tree shaking.

  • In this talk by François Martin, he explains how some modules may not be tree-shakable due to using ES2015 imports with CommonJS exports. Since third-party dependencies are beyond our control, the only workaround in such cases may be forking or vendoring the dependency, though this is often a last resort.

4. What Tools Can Help?

Several tools can assist in optimizing and analyzing bundle size; bundle analyzers such as webpack-bundle-analyzer or rollup-plugin-visualizer, for example, show what ends up in each chunk, and auditing tools like Lighthouse help flag delivery bottlenecks.

Summary

While optimizing website speed is crucial, it's only half the story. In real-world scenarios, additional UX measures such as loading placeholders, skeleton screens, or loading spinners can help mitigate slow load times.

Modern web frameworks also introduce factors like hydration and Flash of Unstyled Content (FOUC), which impact user experience. Instead of applying all optimizations blindly, it's best to identify bottlenecks and focus on the most impactful strategies for your setup.

Go defer Can Mess Up Your Intended Code Logic

· 4 min read

Usefulness of defer

defer is a Go feature that postpones a function call until the surrounding function is about to return. It is not simply a way to move a statement from the current line to the last line of the enclosing function. defer can become problematic when multiple defer statements are involved (making execution order tricky to determine) or when it is used to enforce a logical execution order.

Two Common Pitfalls When Using defer

Misusing defer can lead to unintended consequences. A common mistake is deferring a function call directly, rather than wrapping it in an anonymous function, when one of its arguments changes later in the function. For example:

package main

import (
    "fmt"
)

func world(val string) {
    fmt.Printf("%s from world\n", val)
}

func main() {
    val := "hi"
    defer world(val) // val is evaluated here, not at execution time
    val = "hello"
    fmt.Println("hello from main")
}

At the point of invoking defer world(val), the value of val is captured as "hi". Later changes to val do not affect this deferred function call, which can be undesirable.

One such undesirable scenario is passing an error object. If we declare var err error and attempt to defer funcName(err) or channelName <- err, the parameter err is immediately evaluated, but execution is delayed. This can result in sending an outdated error value.
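
Here is a small sketch of that pitfall (reportErr is a hypothetical helper): the nil error is captured before the real failure is assigned.

package main

import (
    "errors"
    "fmt"
)

// reportErr is a hypothetical error-reporting helper.
func reportErr(err error) {
    fmt.Println("reported:", err)
}

func doWork() {
    var err error
    defer reportErr(err) // err is evaluated now, while it is still nil

    err = errors.New("something failed") // too late: nil was already captured
}

func main() {
    doWork() // prints "reported: <nil>" instead of the real error
}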

To fix this, we can use an anonymous function:

package main

import (
    "fmt"
)

func world(val string) {
    fmt.Printf("%s from world\n", val)
}

func main() {
    val := "hi"
    defer func() {
        world(val)
    }()

    ch := make(chan string, 1)
    defer func() {
        v := <-ch
        fmt.Printf("%s from channel\n", v)
    }()

    defer func() {
        ch <- val
    }()

    val = "hello"

    fmt.Println("hello from main")
}

This produces:

hello from main
hello from channel
hello from world

Here, we deferred the channel send operation. If the channel is used to signal the completion of an entire operation, this ensures it triggers at the correct time.

Another example:

package main

import "fmt"

func lastOperation() {
    fmt.Println("Doing something")
}

func main() {
    ch := make(chan string, 1)
    defer lastOperation()
    ch <- "done" // completion is signaled before lastOperation runs
}

The channel is notified before lastOperation() executes, making the logic incorrect. The last operation should be done before notifying completion, not the other way around.
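
One way to restore the intended order (a sketch, not taken from the original example) is to defer the notification as well, registering it first so that LIFO runs it last:

package main

import "fmt"

func lastOperation() {
    fmt.Println("Doing something")
}

func main() {
    ch := make(chan string, 1)
    defer func() { ch <- "done" }() // registered first, runs last: notify after the work
    defer lastOperation()           // registered second, runs first (LIFO)
}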

One more noteworthy example, from Reddit, highlights that defer evaluates the receiver and arguments of its call immediately:

type A struct {
    text string
}

func (a *A) Do() {
    _ = a.text
}

func DoSomething() {
    var a *A

    defer a.Do()
    // vs
    // defer func() { a.Do() }()

    a = &A{}
}

defer a.Do() causes a runtime panic: the receiver a is evaluated (and captured as a nil pointer) when the defer statement runs, so the deferred Do() dereferences nil via a.text. Using defer func() { a.Do() }() instead delays evaluation of a until execution, by which point it has been assigned a valid value.

Contrast it with:

type A struct{}

func (a *A) Do() {}

func DoSomething() {
    var a *A

    defer a.Do()

    a = &A{}
}

Here, a is still a nil pointer at the time of defer evaluation, but since Do() does not dereference a, the call is safe.

To summarize, if the timing of argument evaluation is irrelevant, deferring funcName() directly is fine. Otherwise, wrap it in an anonymous function to delay evaluation.


The second common pitfall is to register defer statements too late in the function. This can result in them never executing if the function exits early (e.g., due to error handling).
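
A sketch of the fix, using a hypothetical resource type: register the defer immediately after acquiring the resource, before any early return.

package main

import (
    "errors"
    "fmt"
)

// resource is a hypothetical type used to illustrate defer placement.
type resource struct{}

func (r *resource) Close()          { fmt.Println("closed") }
func (r *resource) validate() error { return errors.New("invalid") }

func process() error {
    res := &resource{}
    defer res.Close() // registered early: runs on every exit path below

    if err := res.validate(); err != nil {
        return err // Close still runs; a defer placed after this line would never register
    }
    return nil
}

func main() {
    fmt.Println(process())
}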

Best practices:

  • Wrap the defer call in an anonymous function if necessary to prevent immediate parameter evaluation.
  • Place defer statements as early as possible (and logical) in the function to ensure they are registered before any early return logic.

More on Multiple defer

When multiple defer statements are used, their execution follows a stack-based order—Last In, First Out (LIFO). Deferred executions occur in reverse order from their placement in the function. Understanding this order is critical in cases like:

  • Ensuring consistent mutex unlocking sequences.
  • Correctly signaling completion in operations that depend on ordered execution.

Consider a structure where A is an operation and A.a is a sub-operation. Without defer, the correct order would be:

  • Send done to A.a's channel.
  • Send done to A's channel.

But, with a single defer:

  • (defer) Send done to A.a's channel.
  • Send done to A's channel.

This could lead to incorrect order (A is marked done before A.a).

A similar issue arises when both are deferred incorrectly:

  • (defer) Send done to A.a's channel.
  • (defer) Send done to A's channel.

Since defer follows LIFO, A is marked done before A.a.

Correcting defer order:

  • (defer) Send done to A's channel.
  • (defer) Send done to A.a's channel.

Now, A.a completes before A, ensuring the correct sequence.
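
Here is a minimal sketch of the corrected ordering (the channels and operation names are illustrative):

package main

import "fmt"

func main() {
    aDone := make(chan struct{}, 1)   // signals completion of operation A
    subDone := make(chan struct{}, 1) // signals completion of sub-operation A.a

    // Registered first, so it runs last (LIFO): A is marked done after A.a.
    defer func() {
        aDone <- struct{}{}
        fmt.Println("A done")
    }()

    // Registered second, so it runs first: A.a is marked done before A.
    defer func() {
        subDone <- struct{}{}
        fmt.Println("A.a done")
    }()

    fmt.Println("doing the work of A and A.a")
}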