# Speeding up sideeffects with JIT in Mountaineer

# June 25, 2025

Buttons, man. They're always just out there doing something. When's the last time you met a button that doesn't make some change to your data? Exactly.

Most times when you click a button on the web, your browser sends a request to some remote backend, which in turn performs some logic and updates the database, and your API server pushes back a modified object so the frontend knows what to update. Most web frameworks force you to update your local frontend state with the expected results based on this response.[1]

Mountaineer provides a convention to make this a bit more seamless without the boilerplate. You put all your logic within a centralized render() on your backend. Your frontend gets access to this payload to lay out its initial view. Any actions that modify the server state are decorated on the server with a @sideeffect. Mountaineer internally takes care of refetching the whole updated state for the page and passing it back to the frontend.[2]

But what about situations where this full re-render is too expensive and you want something more surgical? Mountaineer actually has support for that too. When you mark a function as a sideeffect, you can tell the framework exactly which pieces of data you modified. We then use just-in-time compilation to create a ghost version of your render function that only computes the values you actually need to update.

Instead of recalculating your entire dashboard when you increment a single counter, Mountaineer traces through your code's dependency graph and executes only the 3 lines needed to return that counter's new value. Let's look at exactly how we do it.

## What is JIT?

Just-In-Time compilation traditionally refers to a technique where code gets compiled to machine code at runtime rather than ahead of time. Languages like Java, C#, and modern JavaScript engines like V8 use JIT compilation to optimize hot code paths based on actual runtime behavior.[3]

The "just-in-time" part means the compilation happens exactly when you need it, with full knowledge of the runtime context. A JIT compiler can make aggressive optimizations because it knows the specific types, values, and execution patterns of your running program. This is especially valuable for an interpreted language, where that type information isn't available ahead of time.

Mountaineer applies JIT principles at the web framework level instead of the interpreter level. When you tell Mountaineer that a sideeffect only modifies certain fields, it generates a specialized version of your render function at runtime - one that only computes those specific fields.
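To make "generates a specialized version at runtime" concrete, here's a minimal sketch of runtime code generation in plain Python - not Mountaineer's actual internals, just the core trick: build source for a narrower function as a string, compile it, and execute it. The `specialize` helper and field names are illustrative.

```python
# Illustrative sketch of runtime specialization: generate source for a
# narrower function, compile it, and execute it to get a callable that
# computes only one field.
def specialize(field_name: str):
    source = (
        f"def render_{field_name}_only(data):\n"
        f"    return {{'{field_name}': data['{field_name}']}}\n"
    )
    namespace: dict = {}
    exec(compile(source, "<jit>", "exec"), namespace)
    return namespace[f"render_{field_name}_only"]

render_notifications = specialize("notifications")
render_notifications({"notifications": [1, 2], "user_stats": {}})
# → {"notifications": [1, 2]}
```

The interesting part is that `specialize` runs while your program is already executing, with full knowledge of which field the caller cares about - the same "compile with runtime context" idea, applied one level up.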

Let's consider this heavy render function.

# Your original render function
async def render(self) -> DashboardData:
    user_stats = await calculate_user_analytics()      # 800ms
    recent_posts = await fetch_recent_posts()          # 450ms
    notifications = await get_notifications()          # 100ms
    profile_pic = await get_profile_picture()          # 50ms

    return DashboardData(
        user_stats=user_stats,
        recent_posts=recent_posts, 
        notifications=notifications,
        profile_pic=profile_pic
    )

That's 1.4 seconds of overhead, even if only the profile picture changed because of our sideeffect. Wouldn't it be great if we could strip that complex function down into just what changed? Something like:

async def render_profile_pic_only(self) -> dict:
    profile_pic = await get_profile_picture()          # 50ms

    return {
        "profile_pic": profile_pic
    }

With JIT analysis we actually can.

In the default case, Mountaineer reloads everything because we can't intuit the relationship between your sideeffect and your render function. We have to treat every state change as potentially affecting everything, so we recompute everything to be safe. But if we can have an engineer (maybe you) guarantee that the sideeffect will only affect certain attributes, we can make some more aggressive cuts.

The JIT compiler analyzes your function's abstract syntax tree (AST), builds a dependency graph, and generates this specialized version automatically. You never write the optimized function by hand.

If you want to step through all the code side-by-side with this guide, I encourage you to do so.

## AST analysis

Mountaineer solves this by performing static analysis on your Python code. When you specify which fields a sideeffect modifies, we trace backwards through your render function to find only the code paths needed to compute those fields.

@sideeffect(reload=(ProjectDashboardRender.notifications,), experimental_render_reload=True)
async def mark_notification_read(self, notification_id: int) -> None:
    await self.db.execute(
        "UPDATE notifications SET read = true WHERE id = ?", 
        notification_id
    )

The reload=(ProjectDashboardRender.notifications,) parameter tells Mountaineer that this sideeffect only affects the notifications field. For now this behavior is feature flagged under experimental_render_reload.

Here's what happens under the hood:

### Step 1: Extract Return Expressions

We first transform your return statement to make dependency analysis easier. This allows us to refer to each output expression as a single variable that we can traverse a bit more cleanly as a string.

# Original return statement
return ProjectDashboardRender(
    project_stats=project_stats * 5,
    recent_tasks=recent_tasks / 2,
    notifications=notifications_old + notifications_new,
)

# After synthetic variable insertion  
return_synthetic_project_stats = project_stats * 5
return_synthetic_recent_tasks = recent_tasks / 2
return_synthetic_notifications = notifications_old + notifications_new

return ProjectDashboardRender(
    project_stats=return_synthetic_project_stats,
    recent_tasks=return_synthetic_recent_tasks,
    notifications=return_synthetic_notifications,
)

This transformation makes it easy to track which variables contribute to which output fields, by tracing through the intermediate synthetic variable.
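This rewrite can be expressed with the standard library's `ast.NodeTransformer`. Here's a simplified sketch (the class name and structure are my own, not Mountaineer's source) that splits a keyword-argument return into synthetic assignments:

```python
import ast

class SyntheticReturnInserter(ast.NodeTransformer):
    """Rewrite `return Model(field=expr, ...)` into one synthetic assignment
    per field, followed by a return that only references those variables."""

    def visit_Return(self, node: ast.Return):
        if not isinstance(node.value, ast.Call):
            return node
        assignments = []
        new_keywords = []
        for kw in node.value.keywords:
            synth = f"return_synthetic_{kw.arg}"
            # return_synthetic_<field> = <original expression>
            assignments.append(
                ast.Assign(
                    targets=[ast.Name(id=synth, ctx=ast.Store())],
                    value=kw.value,
                )
            )
            new_keywords.append(
                ast.keyword(arg=kw.arg, value=ast.Name(id=synth, ctx=ast.Load()))
            )
        node.value.keywords = new_keywords
        return assignments + [node]  # splice the assignments before the return

tree = ast.parse(
    "def render():\n"
    "    return Model(notifications=old + new)\n"
)
tree = ast.fix_missing_locations(SyntheticReturnInserter().visit(tree))
print(ast.unparse(tree))
```

Unparsing the transformed tree shows `return_synthetic_notifications = old + new` now preceding the return statement.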

### Step 2: Build Dependency Graph

Adding to the acronym party here. An AST (Abstract Syntax Tree) is a symbolic representation of your code's structure, where each node represents a language primitive like a function call, variable assignment, or return statement. Consider this simple assignment: notifications = await self.get_notifications(project_id). In AST form, this becomes a tree of connected nodes:

# notifications = await self.get_notifications(project_id)
Assign(
    targets=[Name(id='notifications')],           # Left side: variable being assigned
    value=Await(                                  # Right side: awaited expression
        value=Call(                               # Function call
            func=Attribute(                       # Method access: self.get_notifications
                value=Name(id='self'),            # Object: self
                attr='get_notifications'          # Method: get_notifications
            ),
            args=[Name(id='project_id')]          # Arguments: [project_id]
        )
    )
)
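You can reproduce this structure yourself with the stdlib ast module (wrapping the statement in an async function so the await parses):

```python
import ast

# Parse the statement and dump the resulting node structure.
tree = ast.parse(
    "async def render(self, project_id):\n"
    "    notifications = await self.get_notifications(project_id)\n"
)
assign = tree.body[0].body[0]  # the Assign node inside the function body
print(ast.dump(assign, indent=4))  # prints the nested Assign/Await/Call nodes
```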

The power of AST analysis lies in how these nodes reference each other, creating a web of dependencies.

When Mountaineer encounters Name(id='project_id') inside this assignment, it knows that the notifications variable depends on project_id. And when it sees notifications referenced in another expression like return_synthetic_notifications = notifications, it can trace that return_synthetic_notifications depends on notifications.

The dependency graph emerges naturally by walking these node relationships: every Name node loaded in a value position becomes a dependency of the Name nodes in the targets position of the same assignment.

    notifications = await self.get_notifications(project_id)
            │
      [Assign Node]
            ├── [targets: notifications] ←── depends on ── [args: project_id]
            └── [value: Await(...)]
                        └── [Call to get_notifications]

Our next pipeline step is walking through your render() function's AST to map out variable dependencies:

{
    'return_synthetic_notifications': {'notifications'},
    'notifications': {'project_id'}, 
    'return_synthetic_project_stats': {'project_stats'},
    'project_stats': {'project_id'},
    'return_synthetic_recent_tasks': {'recent_tasks'},
    'recent_tasks': {'project_id'},
}
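A toy version of this walk fits in a few lines. This is my own simplification, not Mountaineer's implementation - it only handles plain assignments and doesn't filter out `self` the way a real analysis would - but it maps each assigned variable to the names its right-hand side reads:

```python
import ast

def build_dependency_graph(source: str) -> dict[str, set[str]]:
    """Map each assigned variable to the Name nodes its value expression
    loads. Simplified sketch: only handles plain `x = ...` assignments."""
    graph: dict[str, set[str]] = {}
    func = ast.parse(source).body[0]
    for stmt in ast.walk(func):
        if isinstance(stmt, ast.Assign):
            # Every Name loaded on the right-hand side is a dependency.
            reads = {
                n.id
                for n in ast.walk(stmt.value)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)
            }
            for target in stmt.targets:
                if isinstance(target, ast.Name):
                    graph[target.id] = reads
    return graph

graph = build_dependency_graph(
    "async def render(self, project_id):\n"
    "    notifications = await self.get_notifications(project_id)\n"
    "    return_synthetic_notifications = notifications\n"
)
# graph["notifications"] contains "project_id" (and "self", which a real
# analysis would filter out)
```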

### Step 3: Trace Required Variables

Starting from the target field (notifications), Mountaineer recursively finds all variables it depends on:

  • return_synthetic_notifications depends on notifications
  • notifications depends on project_id
  • project_id is a function parameter

So the minimal set of variables needed is: {project_id, notifications, return_synthetic_notifications}
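Step 3 is a plain graph-reachability problem. A sketch of the backwards trace, assuming the dependency-dict shape from Step 2 (`trace_required` is an illustrative name, not Mountaineer's API):

```python
def trace_required(graph: dict[str, set[str]], target: str) -> set[str]:
    """Collect every variable transitively needed to compute `target`.
    Names absent from the graph (function parameters) are leaves."""
    required: set[str] = set()
    stack = [target]
    while stack:
        name = stack.pop()
        if name in required:
            continue
        required.add(name)
        stack.extend(graph.get(name, ()))
    return required

graph = {
    "return_synthetic_notifications": {"notifications"},
    "notifications": {"project_id"},
    "return_synthetic_project_stats": {"project_stats"},
    "project_stats": {"project_id"},
}
trace_required(graph, "return_synthetic_notifications")
# → {"return_synthetic_notifications", "notifications", "project_id"}
```

Note that project_stats never enters the set, even though it shares the project_id parameter - only edges reachable from the target matter.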

### Step 4: Generate Specialized Function

The ASTReducer creates a new version of your render function containing only the statements needed to compute those variables:

async def render_notifications_only(self, project_id: int) -> dict:
    notifications = await self.get_notifications(project_id)  # 100ms
    return_synthetic_notifications = notifications

    return {
        "notifications": return_synthetic_notifications
    }

The JIT-compiled function runs in 100ms instead of the full render - against our earlier 1.4 second dashboard, a hypothetical 14x speedup. The neat thing is this works for async and sync function calls alike. If no meaningful work is done for the payload that we care about, it's as if we never wrote the code in the first place.
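The reduction itself can be sketched as an AST filter: keep only the statements whose targets appear in the required set, then unparse (or compile) the result. This is a simplification of what an AST reducer does - a real implementation would also rewrite the return statement to emit only the requested fields:

```python
import ast

def reduce_function(source: str, required: set[str]) -> str:
    """Drop every assignment whose target isn't in `required` and return
    the reduced source. Sketch only: the return statement is left as-is."""
    tree = ast.parse(source)
    func = tree.body[0]
    func.body = [
        stmt
        for stmt in func.body
        # Keep non-assignments, and assignments to a required variable.
        if not isinstance(stmt, ast.Assign)
        or any(
            isinstance(t, ast.Name) and t.id in required
            for t in stmt.targets
        )
    ]
    return ast.unparse(tree)

source = (
    "async def render(self, project_id):\n"
    "    notifications = await self.get_notifications(project_id)\n"
    "    stats = await self.calculate_stats(project_id)\n"
    "    return {'notifications': notifications}\n"
)
print(reduce_function(source, {"notifications"}))  # the stats line is gone
```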

## Dependency tracing in action

Let's trace through a more complex example to see how the dependency analysis works. Most render functions won't fully delegate all of their function calls. Consider this render function with interdependent calculations all embedded inline:

async def render(self, user_id: int) -> AnalyticsRender:
    # Base data fetching
    raw_events = await self.fetch_user_events(user_id)
    user_profile = await self.get_user_profile(user_id)

    # Derived calculations
    total_events = len(raw_events)
    events_this_week = self.filter_events_by_week(raw_events)
    weekly_count = len(events_this_week)

    # Complex interdependent calculation
    engagement_score = self.calculate_engagement(
        total_events, 
        weekly_count, 
        user_profile.account_age
    )

    # Final aggregation
    summary_stats = self.generate_summary(
        total_events,
        engagement_score
    )

    return AnalyticsRender(
        raw_events=raw_events,
        total_events=total_events,
        weekly_count=weekly_count,
        engagement_score=engagement_score,
        summary_stats=summary_stats,
        user_profile=user_profile
    )

Now let's say we have a sideeffect that only modifies the user's profile picture:

@sideeffect(reload=(AnalyticsRender.user_profile,), experimental_render_reload=True)  
async def update_profile_picture(self, user_id: int, new_picture: str) -> None:
    await self.db.execute(
        "UPDATE users SET profile_picture = ? WHERE id = ?",
        new_picture, user_id
    )

When we trace backwards from user_profile, we discover it only depends on:

  • user_profile → get_user_profile (function)
  • get_user_profile → user_id (function parameter)

The JIT compiler generates:

async def render_user_profile_only(self, user_id: int) -> dict:
    user_profile = await self.get_user_profile(user_id)
    return {"user_profile": user_profile}

All the complex event processing, engagement scoring, and summary generation gets stripped out. But what if we had marked a notification as read, which affects the engagement score calculation?

@sideeffect(reload=(AnalyticsRender.engagement_score,), experimental_render_reload=True)
async def mark_achievement_earned(self, user_id: int) -> None:
    # This changes the user's total event count, affecting engagement
    await self.db.execute(
        "INSERT INTO user_events (user_id, event_type) VALUES (?, 'achievement')",
        user_id
    )

Now the dependency trace is much more complex:

  • engagement_score → total_events, weekly_count, user_profile.account_age
  • total_events → raw_events
  • weekly_count → events_this_week
  • events_this_week → raw_events
  • user_profile.account_age → user_profile
  • raw_events → user_id
  • user_profile → user_id

The JIT-compiled function includes all the necessary computation:

async def render_engagement_score_only(self, user_id: int) -> dict:
    raw_events = await self.fetch_user_events(user_id)
    user_profile = await self.get_user_profile(user_id)

    total_events = len(raw_events)
    events_this_week = self.filter_events_by_week(raw_events)
    weekly_count = len(events_this_week)

    engagement_score = self.calculate_engagement(
        total_events,
        weekly_count, 
        user_profile.account_age
    )

    return {"engagement_score": engagement_score}

Notice that the summary_stats calculation gets stripped out since it's not needed for engagement_score. The system is surgical. It includes exactly what's needed, nothing more.

## When JIT compilation shines

JIT render optimization works best when your application has:

Heavy Render Functions: The bigger the original render, the bigger the win. If your render function is already fast (< 100ms), the optimization overhead might not be worth it - JIT compilation adds a small startup cost to analyze and compile the specialized function.

Clear Data Boundaries: The technique works best when different parts of your UI correspond to distinct data sources. If everything depends on everything else, there's less opportunity for optimization.

Frequent Targeted Updates: Applications with lots of small, specific user interactions (like social media likes, shopping cart updates, or notification management) see the biggest benefits.

## The future of surgical updates

The Mountaineer JIT is in beta for a reason. It doesn't work for very complex render functions and AST trees. I've tested branching logic and nested loops and they work fine, but the second you introduce closures and nonlocal variables, things get tricky fast. But I'm treating it as a solid proof of concept for ways we can transparently make webapps faster, with only minor changes required to your application over time.

The JIT trend is growing in other areas. Scientific computation has been focused on it for a while.[4] React's upcoming compiler takes a similar approach at the component level, automatically optimizing re-renders by analyzing component dependencies. Svelte's philosophy has always been about generating targeted updates rather than general-purpose runtime code.

Mountaineer extends this concept to the full-stack level. The framework understands the relationship between your database changes, server-side logic, and frontend state updates. It can optimize not just what gets sent to the browser but what gets computed on the server in the first place.

Web applications spend too much time recomputing data that hasn't changed. JIT compilation at the framework level makes surgical precision the default, not an optimization you have to build yourself.

If you're building applications where render performance matters, give Mountaineer's JIT optimization a try. You might be surprised how much faster your app feels when it stops doing unnecessary work.


  1. You can also optimistically update your frontend state as if the change was successful and rollback if it wasn't. 

  2. All using standard POST/GET requests. It feels good to be using battle tested http constructs. 

  3. If you want to get really pedantic, Java and C# compile to bytecode first and then compile hot loops to machine code. 

  4. Farming out to a multi-million dollar remote compute cluster will do that to a person. 
