
The Need For Speed to Deliver Your Website Fast

· 5 min read

Why The Need For Speed

Websites are viewed on browsers of all kinds, on devices of all kinds, and over network speeds of all kinds. These variables shape how differently users experience the same website. One of the most noticeable differences is delivery speed: how quickly a page becomes fully rendered and interactive. When there is a significant delay, users may stare at a blank page until the fully rendered page suddenly appears. That delay is the main focus of this discussion.

Issue

The process of loading a website typically follows these steps:

  1. The user enters an address in the browser address bar
  2. The browser requests the required HTML/JS/CSS from the server
  3. The browser parses and renders the page, making it viewable by the user

Factors That May Impact the Process

  1. Server – It could be slow in handling user requests, taking a long time to retrieve or compute and return the necessary assets.
  2. Network – The speed of data transmission might be slow, limiting how fast data can be transferred.
  3. Asset Size – Larger files take longer to transmit, affecting load times.
  4. Browser – The browser's processing speed also influences how quickly a page can be rendered.

The server and browser are somewhat related, as are network speed and asset size. For example, a high-performance server may still be bottlenecked by a slow browser (device/compute dependent), and a fast network may still struggle with large asset sizes.

Ideally, optimizing all four factors (more powerful server, faster network, smaller assets, and a more capable browser) would eliminate bottlenecks. However, some components are harder to guarantee than others. For example:

  • Your browser might run on a powerful computer, but if the machine is under heavy load, performance will degrade.
  • Your network might be as fast as advertised, but unexpected delays can still occur if the upstream provider experiences issues or physical cables are damaged by natural disasters.

There are many strategies to optimize delivery, such as caching, which may be built into a component or protocol to improve efficiency. When fine-tuning optimizations, it's crucial to assess the specific setup of your website and apply targeted improvements accordingly.

Practical Considerations

1. What Kind of Website?

The optimization approach varies based on the type of website.

For a vanilla website (simple HTML, JavaScript, and CSS), the primary concern is file size—smaller files lead to faster load times.

For framework-based websites (e.g., React), optimization depends on whether you're building a Single Page Application (SPA) or a Multi-Page Application (MPA):

  • MPA – Optimize at the page level, ensuring each page loads only the necessary code.
  • SPA – Optimize using code splitting (chunking), so that only required JavaScript loads for a given page, reducing initial load time.

While chunking reduces the size of any single bundle, it introduces a challenge: most chunking algorithms embed content hashes in the generated file names to prevent collisions and bust caches, so file names change from build to build. Manual chunking can help but requires careful management. A common strategy is to chunk by third-party dependencies, keeping external libraries in separate vendor chunks. However, this means any dependency update can change the bundle structure.

2. How to Reduce File Size

Common techniques for reducing file size include:

  1. Minification – Strips out unnecessary characters, such as comments and whitespace, from code.
  2. Compression – Reduces file size further using algorithms like Gzip or Brotli.

Compression can be applied at two levels:

  • Server-side – Configuring the server to perform compression on the fly when serving files.
  • Build-time – Generating pre-compressed files during the build process to be served when supported by the client.

While compression reduces network transfer time, decompression at the browser level introduces slight delays. If optimizing down to milliseconds, this tradeoff should be considered.
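
As a rough illustration of the server-side option, here is a minimal sketch in Go (the language used in the other posts on this blog) of a handler that compresses on the fly when the client advertises gzip support; the handler name and served content are made up for illustration:

package main

import (
    "compress/gzip"
    "io"
    "net/http"
    "strings"
)

// serveCompressed sends the page gzipped when the client advertises
// support for it, and falls back to the uncompressed bytes otherwise.
func serveCompressed(w http.ResponseWriter, r *http.Request) {
    page := strings.Repeat("<p>hello, compression</p>\n", 100)
    w.Header().Set("Content-Type", "text/html; charset=utf-8")

    if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
        io.WriteString(w, page) // client did not ask for gzip
        return
    }

    w.Header().Set("Content-Encoding", "gzip")
    gz := gzip.NewWriter(w)
    defer gz.Close() // flush the compressed stream before the handler returns
    io.WriteString(gz, page)
}

func main() {
    http.HandleFunc("/", serveCompressed)
    http.ListenAndServe(":8080", nil)
}

Build-time compression trades this per-request CPU cost for a little extra serving logic: the server has to pick the pre-generated .gz or .br variant based on the Accept-Encoding header.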

3. What Code-Level Changes Can Be Made?

Code-level optimizations focus on reducing unnecessary code in the final bundle. A key technique is tree shaking, which eliminates unused code during bundling.

Example: Importing only what's needed

import Button from '@mui/material/Button';

instead of a top-level import from the package root, which can pull in the entire library if tree shaking is not effective:

import { Button, TextField } from '@mui/material';

According to the Material-UI documentation, if your dependencies are configured for tree shaking, top-level imports may still be optimized by the bundler.

  • In this talk by François Martin, he explains how some modules may not be tree-shakable due to using ES2015 imports with CommonJS exports. Since third-party dependencies are beyond our control, the only workaround in such cases may be forking or vendoring the dependency, though this is often a last resort.

4. What Tools Can Help?

Several tools can assist in optimizing and analyzing bundle size, such as bundle analyzers that visualize what ends up in each chunk.

Summary

While optimizing website speed is crucial, it's only half the story. In real-world scenarios, additional UX measures such as loading placeholders, skeleton screens, or loading spinners can help mitigate slow load times.

Modern web frameworks also introduce factors like hydration and Flash of Unstyled Content (FOUC), which impact user experience. Instead of applying all optimizations blindly, it's best to identify bottlenecks and focus on the most impactful strategies for your setup.

Go defer Can Mess Up Your Intended Code Logic

· 4 min read

Usefulness of defer

defer is a Go feature that delays the execution of a function call until the surrounding function returns. It is not simply a way to move a statement from the current line to the last line of the enclosing function. defer can become problematic when multiple defer statements are involved (making execution order tricky to determine) or when it is used to enforce a logical execution order.

Two Common Pitfalls When Using defer

Misusing defer can lead to unintended consequences. A common mistake is deferring a direct function call, rather than wrapping it in an anonymous function, when an argument holds a value that changes later. For example:

package main

import (
    "fmt"
)

func world(val string) {
    fmt.Printf("%s from world", val)
}

func main() {
    val := "hi"
    defer world(val)
    val = "hello"
    fmt.Println("hello from main")
}

At the point of invoking defer world(val), the value of val is captured as "hi". Later changes to val do not affect this deferred function call, which can be undesirable.

One such undesirable scenario is passing an error object. If we declare var err error and then defer funcName(err), or defer a wrapped channel send such as defer func(e error) { ch <- e }(err), the argument err is evaluated immediately even though execution is delayed. This can result in reporting an outdated (often still nil) error value.
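
A minimal sketch of that pitfall (doWork and report are made-up names for illustration):

package main

import (
    "errors"
    "fmt"
)

func report(err error) {
    fmt.Println("reported error:", err)
}

// doWork defers the error report, but the argument err is evaluated
// immediately, while it is still nil; the later assignment is never seen.
func doWork() {
    var err error
    defer report(err) // err is captured as nil right here

    err = errors.New("something went wrong") // too late for the deferred call
}

func main() {
    doWork() // prints: reported error: <nil>
}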

To fix this, we can use an anonymous function:

package main

import (
    "fmt"
)

func world(val string) {
    fmt.Printf("%s from world", val)
}

func main() {
    val := "hi"
    defer func() {
        world(val)
    }()

    ch := make(chan string, 1)
    defer func() {
        v := <-ch
        fmt.Printf("%s from channel\n", v)
    }()

    defer func() {
        ch <- val
    }()

    val = "hello"

    fmt.Println("hello from main")
}

This produces:

hello from main
hello from channel
hello from world

Here, we deferred the channel send operation. If the channel is used to signal the completion of an entire operation, this ensures it triggers at the correct time.

Another example:

package main

import "fmt"

func lastOperation() {
    fmt.Println("Doing something")
}

func main() {
    ch := make(chan string, 1)
    defer lastOperation()
    ch <- "done" // signals completion before lastOperation has run
}

The channel is notified before lastOperation() executes, making the logic incorrect. The last operation should be done before notifying completion, not the other way around.

One more noteworthy example from Reddit highlights how defer evaluates the receiver immediately, while a closure delays that evaluation:

type A struct {
    text string
}

func (a *A) Do() {
    _ = a.text
}

func DoSomething() {
    var a *A

    defer a.Do()
    // vs
    // defer func() { a.Do() }()

    a = &A{}
}

defer a.Do() causes a runtime panic: the receiver a is evaluated as nil when the defer statement is executed, so when Do runs at function exit it dereferences a nil pointer. Using defer func() { a.Do() }() instead delays the evaluation of a, allowing it to be assigned a valid value before execution.

Contrast it with:

type A struct{}

func (a *A) Do() {}

func DoSomething() {
    var a *A

    defer a.Do()

    a = &A{}
}

Here, a is still a nil pointer at the time of defer evaluation, but since Do() does not dereference a, the call is safe.

To summarize, if the immediate evaluation of the receiver and arguments does not matter, defer funcName() is fine. Otherwise, wrap the call in an anonymous function to delay evaluation.


The second common pitfall is registering defer statements too late in the function. If the function returns early (e.g., in an error-handling branch) before reaching the defer statement, the deferred call is never registered and therefore never executes.
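
A minimal sketch of this pitfall (readConfig and config.yaml are made up for illustration):

package main

import (
    "fmt"
    "os"
)

// readConfig places its defer too late: if one of the checks triggers an
// early return, f.Close() was never registered and the file handle leaks.
func readConfig(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }

    info, err := f.Stat()
    if err != nil {
        return err // early exit: the defer below was never reached
    }
    if info.Size() == 0 {
        return fmt.Errorf("empty config") // same problem here
    }

    defer f.Close() // registered too late; move this right after os.Open succeeds

    // ... read and parse the file ...
    return nil
}

func main() {
    if err := readConfig("config.yaml"); err != nil {
        fmt.Println("error:", err)
    }
}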

Best practices:

  • Wrap the defer call in an anonymous function if necessary to prevent immediate parameter evaluation.
  • Place defer statements as early as possible (and logical) in the function to ensure they are registered before any early return logic.

More on Multiple defer

When multiple defer statements are used, their execution follows a stack-based order—Last In, First Out (LIFO). Deferred executions occur in reverse order from their placement in the function. Understanding this order is critical in cases like:

  • Ensuring consistent mutex unlocking sequences.
  • Correctly signaling completion in operations that depend on ordered execution.

Consider a structure where A is an operation and A.a is a sub-operation. Without defer, the correct order would be:

  • Send done to A.a's channel.
  • Send done to A's channel.

But, with a single defer:

  • (defer) Send done to A.a's channel.
  • Send done to A's channel.

This could lead to incorrect order (A is marked done before A.a).

A similar issue arises when both are deferred incorrectly:

  • (defer) Send done to A.a's channel.
  • (defer) Send done to A's channel.

Since defer follows LIFO, A is marked done before A.a.

Correcting defer order:

  • (defer) Send done to A's channel.
  • (defer) Send done to A.a's channel.

Now, A.a completes before A, ensuring the correct sequence.
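
A minimal sketch of the corrected ordering (operationA, done, and subDone are made-up names for illustration):

package main

import "fmt"

// operationA runs a sub-operation A.a and signals completion of both.
// Registering A's send before A.a's means LIFO execution sends A.a's
// signal first, so A is never marked done before its sub-operation.
func operationA(done, subDone chan<- string) {
    defer func() { done <- "A done" }()      // registered first, runs last
    defer func() { subDone <- "A.a done" }() // registered second, runs first

    fmt.Println("running A and its sub-operation A.a")
}

func main() {
    done := make(chan string, 1)
    subDone := make(chan string, 1)

    operationA(done, subDone)

    fmt.Println(<-subDone) // A.a done
    fmt.Println(<-done)    // A done
}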

Go Nil Pointer Dereference Problem with FindXXX

· 3 min read

The Problem

A nil pointer dereference is a well-known runtime error to avoid. The cause is simple to explain: a pointer to a struct is passed to or returned from a function, and accessing the struct's fields (or calling a method that dereferences the receiver) panics at runtime if the pointer turns out to be nil.

The FindXXX Pattern

In our code logic, there are often cases where an identifier maps to an in-memory representation of a struct or object. These objects are frequently stored in a map, but they can also be retrieved or reconstructed from a file or an endpoint. When such an object needs to be passed to related functions, some code passes the identifier rather than the object itself, and each function looks the object up on demand.

For example, in the context of a library system:

package main

import "fmt"

type Book struct {
    Title  string
    Author string
}

type Library struct {
    books map[string]*Book
}

func (l *Library) AddBook(title, author string) {
    l.books[title] = &Book{Title: title, Author: author}
}

func (l *Library) FindBook(title string) *Book {
    if book, ok := l.books[title]; !ok {
        return nil
    } else {
        return book
    }
}

func main() {
    library := Library{books: make(map[string]*Book)}
    library.AddBook("Your Code as a Crime Scene", "Adam Tornhill")

    // exists: fields can be accessed safely
    book := library.FindBook("Your Code as a Crime Scene")

    fmt.Printf("Book: %s, Author: %s\n", book.Title, book.Author)

    // does not exist: FindBook returns nil and the next line panics
    book = library.FindBook("The Phoenix Project")

    fmt.Printf("Book: %s, Author: %s\n", book.Title, book.Author)
}

In the code above, FindBook provides a way to retrieve a book representation using the book title. The problem with this design is that nil checks are not enforced by the compiler, which can lead to carelessness in validating the returned object before accessing its fields.

The runtime panic:

Book: Your Code as a Crime Scene, Author: Adam Tornhill
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x100e81e0c]

goroutine 1 [running]:
main.main()
/Users/yong/Documents/GitHub/learn-go/library/main.go:46 +0x1bc
exit status 2

The Fix

In other languages, one might simply throw an exception when the requested object does not exist. In Go, however, returning an error is generally preferred, as it makes the failure mode part of the function's signature rather than something callers must remember to check for.

Of course, documenting the function with a comment to indicate the need for nil checks is better than nothing, but a more robust solution is to return an error explicitly:

func (l *Library) GetBook(title string) (*Book, error) {
    if book, ok := l.books[title]; !ok {
        return nil, fmt.Errorf("book not found: %s", title)
    } else {
        return book, nil
    }
}
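
With the error in the signature, the call site is nudged into handling the missing case explicitly. A minimal sketch of how the lookup in the earlier main might look with GetBook:

book, err := library.GetBook("The Phoenix Project")
if err != nil {
    fmt.Println("could not get book:", err)
    return
}
fmt.Printf("Book: %s, Author: %s\n", book.Title, book.Author)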

Summary

While this issue may seem trivial, it is more widespread and insidious than one might think. There are many scenarios where a FindXXX pattern (if such a term exists) can lead to the slippery slope of hidden nil pointer dereferences in the codebase.

Not returning an error and instead relying on a nil pointer is one part of the problem. The other issue is the practice of passing around identifiers, which leads to a loss of type safety—but that’s a topic for another day.