
1. Overview

Running a shell script on every HTTP request in Nginx presents unique challenges since Nginx doesn’t natively support direct shell script execution. However, several workarounds exist that we can implement depending on our specific needs.

Common use cases include triggering LED indicators on embedded devices, logging custom metrics, invalidating caches, or sending notifications. Moreover, each approach we’ll explore has different performance characteristics and security implications.

In this tutorial, we’ll examine multiple methods to execute shell scripts when Nginx receives requests. In particular, we’ll explore methods such as using the Lua module, FastCGI wrappers, and the mirror module.

2. Understanding the Challenge

Nginx operates as an event-driven, non-blocking web server. Therefore, it deliberately avoids spawning processes or executing external commands directly from its core configuration. This design choice maximizes performance and stability.

When we introduce shell script execution on every request, we face several concerns. First, spawning a new process for each request significantly impacts performance.

Additionally, executing arbitrary shell commands opens potential security vulnerabilities. Finally, handling script errors and timeouts becomes critical to prevent blocking the request pipeline.

Despite these challenges, legitimate use cases exist where the benefits outweigh the costs. Therefore, let’s explore the available solutions.

3. Using the HttpLuaModule

The HttpLuaModule provides the most direct way to execute shell scripts from within Nginx. This module embeds Lua into Nginx, allowing us to run Lua code that can spawn system processes. Let’s examine how to set up and use the module.

3.1. Installing HttpLuaModule

The HttpLuaModule (also known as lua-nginx-module) enables Lua scripting within Nginx. On Ubuntu/Debian systems, we can install it through the apt-get command:

$ sudo apt-get install libnginx-mod-http-lua

Alternatively, we can install nginx-extras, which includes the Lua module along with other useful modules:

$ sudo apt-get install nginx-extras

For CentOS/RHEL systems, we can use the yum command:

$ sudo yum install nginx-mod-http-lua

However, for systems where these packages aren’t available, we might need to compile Nginx with the Lua module from source or use OpenResty (which comes with Lua support built-in).

After installation, the module should be automatically loaded. For manual loading, we can add this to our nginx.conf file:

load_module modules/ngx_http_lua_module.so;

This configuration tells Nginx to load the Lua module at startup. We can verify the module loaded correctly by checking the Nginx configuration with nginx -t and looking for any module-related errors.

3.2. Executing Shell Scripts With Lua

Once installed, we can execute shell scripts using Lua’s os.execute() function. Here’s a basic implementation:

location /trigger {
    content_by_lua_block {
        os.execute("/path/to/our/script.sh")
        ngx.say("Script executed")
    }
}

In this example, the configuration executes the specified shell script whenever someone accesses the /trigger endpoint. The script runs synchronously, and the response returns after completion.

When we access this endpoint, we’ll see the output:

Script executed

Alternatively, for more control over script output, we can use io.popen():

location /run-script {
    content_by_lua_block {
        local handle = io.popen("date +'%Y-%m-%d %H:%M:%S'")
        local result = handle:read("*a")
        handle:close()
        
        ngx.header.content_type = "text/plain"
        ngx.say("Current time: ", result)
    }
}

This code executes the date command and captures its output. In this case, the script reads the complete output and includes it in the HTTP response.

This produces an output like:

Current time: 2025-06-15 14:30:45

We can also pass request parameters to our scripts:

location /process {
    content_by_lua_block {
        local args = ngx.var.arg_param or "default"
        local cmd = string.format("echo 'Processing: %s'", args)
        local handle = io.popen(cmd)
        local result = handle:read("*a")
        handle:close()
        
        ngx.say(result)
    }
}

This configuration extracts the param query parameter from the request and passes it to a shell command. Notably, interpolating request input into a command string like this is vulnerable to shell injection, so untrusted values must be validated or escaped before they reach the shell.

Accessing /process?param=test returns:

Processing: test

These examples demonstrate how Lua provides flexible script execution capabilities within Nginx, from simple command execution to capturing output and processing request parameters.
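Since the parameter example above interpolates request input directly into a command, a hardened variant is worth sketching. The whitelist pattern and endpoint name below are illustrative, not prescriptive:

```nginx
location /process-safe {
    content_by_lua_block {
        local args = ngx.var.arg_param or "default"

        -- Reject anything outside a conservative whitelist
        -- (letters, digits, underscore, hyphen) before it reaches the shell
        if not args:match("^[%w_-]+$") then
            ngx.status = 400
            ngx.say("Invalid parameter")
            return
        end

        local handle = io.popen(string.format("echo 'Processing: %s'", args))
        local result = handle:read("*a")
        handle:close()

        ngx.say(result)
    }
}
```

With this guard in place, a request such as /process-safe?param=test;rm%20-rf%20/ is rejected with a 400 instead of being handed to the shell.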

3.3. Limitations and Considerations

Despite its advantages, the Lua approach has notable limitations. Standard error from os.execute() goes to the Nginx error log, while standard output gets discarded. Additionally, the function returns only the subprocess exit code.

Furthermore, os.execute() and io.popen() block the worker process until the command finishes, which can delay request processing. Nginx's lua_socket_connect_timeout and related directives only apply to cosocket operations, not to spawned processes, so long-running commands should enforce their own time limits (for example, via the timeout utility).

Consequently, we should always implement proper error handling so that script failures don't disrupt our Nginx workers.
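As a defensive sketch (the endpoint name and five-second limit are illustrative), we can bound the script's runtime with the coreutils timeout command and check the status that os.execute() returns:

```nginx
location /trigger-safe {
    content_by_lua_block {
        -- timeout(1) bounds the runtime, since os.execute blocks the worker
        local status = os.execute("timeout 5 /path/to/our/script.sh")

        -- Under LuaJIT (Lua 5.1 semantics) os.execute returns a numeric
        -- status; 0 indicates success
        if status == 0 then
            ngx.say("Script executed")
        else
            ngx.log(ngx.ERR, "script exited with status: ", tostring(status))
            ngx.status = 500
            ngx.say("Script failed")
        end
    }
}
```

This way, a hung or failing script produces a logged 500 response instead of an indefinitely blocked worker.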

4. FastCGI Approach With fcgiwrap

As an alternative, FastCGI offers a more traditional approach to executing scripts in web servers. In particular, by using fcgiwrap, we can execute shell scripts as if they were CGI programs, thus providing a familiar interface for developers who’ve worked with CGI in the past.

4.1. Setting Up fcgiwrap

FastCGI provides another method to execute shell scripts. First, we install fcgiwrap:

$ sudo apt-get install fcgiwrap

This command installs fcgiwrap and its dependencies. Furthermore, the package typically includes a systemd service that starts automatically.

After installation, the fcgiwrap service creates a Unix socket at /var/run/fcgiwrap.socket. We can verify it’s running:

$ sudo systemctl status fcgiwrap
● fcgiwrap.service - Simple CGI Server
     Loaded: loaded (/usr/lib/systemd/system/fcgiwrap.service; indirect; preset: enabled)
     Active: active (running) since Sat 2025-06-14 12:09:14 UTC; 4s ago
TriggeredBy: ● fcgiwrap.socket
   Main PID: 4914 (fcgiwrap)
      Tasks: 1 (limit: 9366)
     Memory: 320.0K (peak: 584.0K)
        CPU: 2ms
     CGroup: /system.slice/fcgiwrap.service
             └─4914 /usr/sbin/fcgiwrap -f

The active (running) status confirms fcgiwrap is ready to process requests.

4.2. Configuring Nginx for FastCGI

Next, we configure Nginx to pass requests to fcgiwrap. Here’s a comprehensive configuration:

location ~ \.(sh|pl|py)$ {
    gzip off;
    root /var/www/scripts;
    
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    
    include /etc/nginx/fastcgi_params;
    fastcgi_param DOCUMENT_ROOT /var/www/scripts;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

This configuration matches requests ending in .sh, .pl, or .py and forwards them to fcgiwrap. In addition, the FastCGI parameters provide environment variables that scripts can access.

Next, our shell scripts must be executable and include proper headers. Let’s create /var/www/scripts/info.sh:

$ cat info.sh
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><body>"
echo "<h1>Request received at $(date)</h1>"
echo "<p>Client IP: $REMOTE_ADDR</p>"
echo "<p>Request URI: $REQUEST_URI</p>"
echo "</body></html>"

This script outputs valid HTTP headers followed by HTML content. Meanwhile, the script accesses FastCGI environment variables like $REMOTE_ADDR and $REQUEST_URI.

Let’s not forget to set proper permissions:

$ sudo chmod 755 /var/www/scripts/info.sh

Essentially, this command makes the script executable, which is required for fcgiwrap to run it.
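Before wiring everything through Nginx, we can dry-run the CGI contract locally by exporting the variables that fcgiwrap would normally provide. The copy under /tmp and the sample values below are purely for the test:

```shell
# Throwaway copy of info.sh for a local dry run
cat > /tmp/info.sh <<'EOF'
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><body>"
echo "<h1>Request received at $(date)</h1>"
echo "<p>Client IP: $REMOTE_ADDR</p>"
echo "<p>Request URI: $REQUEST_URI</p>"
echo "</body></html>"
EOF
chmod 755 /tmp/info.sh

# Simulate the FastCGI environment that fcgiwrap would set
REMOTE_ADDR=192.168.1.100 REQUEST_URI=/info.sh /tmp/info.sh
```

If the header line, the blank separator line, and the HTML body all appear, the script honors the CGI contract and should behave the same behind fcgiwrap.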

Accessing /info.sh produces:

<html><body>
<h1>Request received at Sun Jun 15 14:45:30 UTC 2025</h1>
<p>Client IP: 192.168.1.100</p>
<p>Request URI: /info.sh</p>
</body></html>

This approach provides excellent compatibility with existing CGI scripts and clear output formatting.

4.3. Handling Different Script Types

We can extend our configuration to support multiple scripting languages:

location ~ \.(cgi|sh|pl|py|rb)$ {
    gzip off;
    root /var/www/scripts;
    
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    fastcgi_index index.cgi;
    
    include /etc/nginx/fastcgi_params;
    fastcgi_param DOCUMENT_ROOT /var/www/scripts;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
}

This enhanced configuration supports CGI, shell, Perl, Python, and Ruby scripts. In this configuration, we’ve added additional FastCGI parameters to pass more request information to the scripts.

Here’s a Python example that reads POST data (/var/www/scripts/process.py):

$ cat process.py
#!/usr/bin/env python3
import sys
import os

print("Content-type: text/plain\n")
print(f"Method: {os.environ.get('REQUEST_METHOD', 'Unknown')}")
print(f"Query: {os.environ.get('QUERY_STRING', 'None')}")

if os.environ.get('REQUEST_METHOD') == 'POST':
    content_length = int(os.environ.get('CONTENT_LENGTH', 0))
    post_data = sys.stdin.read(content_length)
    print(f"POST data: {post_data}")

This Python script demonstrates accessing various FastCGI environment variables and reading POST data from standard input.

As a result, the script outputs plain text showing the request method, query string, and any POST data received.

Finally, we should make the script executable:

$ sudo chmod 755 /var/www/scripts/process.py
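As with the shell script, we can exercise the Python script locally by faking the FastCGI environment and piping in a request body. The /tmp copy and the nine-byte sample payload are illustrative:

```shell
# Throwaway copy of process.py for a local dry run
cat > /tmp/process.py <<'EOF'
#!/usr/bin/env python3
import sys
import os

print("Content-type: text/plain\n")
print(f"Method: {os.environ.get('REQUEST_METHOD', 'Unknown')}")
print(f"Query: {os.environ.get('QUERY_STRING', 'None')}")

if os.environ.get('REQUEST_METHOD') == 'POST':
    content_length = int(os.environ.get('CONTENT_LENGTH', 0))
    post_data = sys.stdin.read(content_length)
    print(f"POST data: {post_data}")
EOF
chmod 755 /tmp/process.py

# Simulate a POST with a 9-byte body ('name=demo')
printf 'name=demo' | REQUEST_METHOD=POST QUERY_STRING='debug=1' \
    CONTENT_LENGTH=9 python3 /tmp/process.py
```

The output should report the POST method, the query string, and the echoed body, matching what fcgiwrap would relay to the browser.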

The flexibility of this approach makes FastCGI an excellent choice when supporting multiple scripting languages or migrating legacy CGI applications.

5. Using Nginx Mirror Module

In contrast to the previous methods, the mirror module takes a different approach by creating asynchronous background requests. This method excels when we don’t need the script output in the response, making it perfect for logging, metrics collection, or triggering background tasks.

5.1. How Mirror Module Works

The mirror module creates background subrequests for specified URIs. Since responses to mirror subrequests are ignored, this approach works well for fire-and-forget scenarios. Importantly, the main request continues processing immediately without waiting for the mirror request to complete.

5.2. Implementing Mirror-Based Script Execution

Here’s an example of how we can configure mirroring:

location / {
    mirror /mirror;
    mirror_request_body off;
    proxy_pass http://backend;
}

location = /mirror {
    internal;
    proxy_pass http://localhost:8888/execute-script;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

This configuration creates a mirror subrequest to /mirror for every request to /. The internal directive ensures the mirror location can’t be accessed directly.

Moreover, we disable request body passing for performance and add a custom header with the original URI.

We then need a lightweight service listening on port 8888 to execute our scripts. Here's a simple Python implementation (script_server.py):

$ cat script_server.py
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import subprocess
import threading

class ScriptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._handle_request()
    
    def do_POST(self):
        self._handle_request()
    
    def _handle_request(self):
        original_uri = self.headers.get('X-Original-URI', '')
        
        def run_script():
            subprocess.run(['/opt/scripts/log-request.sh', original_uri])
        
        thread = threading.Thread(target=run_script)
        thread.daemon = True
        thread.start()
        
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'OK')

if __name__ == '__main__':
    server = HTTPServer(('localhost', 8888), ScriptHandler)
    print("Script server listening on port 8888")
    server.serve_forever()

As illustrated above, this server handles incoming requests by extracting the original URI from headers and spawning a background thread to execute the shell script. The server responds immediately without waiting for script completion.
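To observe the fire-and-forget pattern in isolation, the sketch below swaps the subprocess call for an in-memory list so it runs without the shell script or Nginx; the handler structure otherwise mirrors the server above, and the ephemeral port and sample URI are made up for the demo:

```python
#!/usr/bin/env python3
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

executed = []  # stands in for the shell script's side effect

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        original_uri = self.headers.get('X-Original-URI', '')

        # Background "script" execution; the response doesn't depend on it
        thread = threading.Thread(target=executed.append, args=(original_uri,))
        thread.daemon = True
        thread.start()
        thread.join()  # joined here only so the demo can inspect the result

        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'OK')

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(('localhost', 0), DemoHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
req = urllib.request.Request(f'http://localhost:{port}/execute-script',
                             headers={'X-Original-URI': '/api/users'})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # OK

server.shutdown()
print(executed)                  # ['/api/users']
```

The client receives OK immediately, while the "script" work happens on a separate thread, which is exactly the decoupling the mirror setup relies on.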

Furthermore, let’s examine the corresponding shell script (/opt/scripts/log-request.sh):

$ cat log-request.sh
#!/bin/bash
echo "$(date): Request to $1" >> /var/log/nginx-requests.log

This script appends a timestamp and the requested URI to a log file. Since it runs asynchronously, it won’t delay the main request.

Finally, we make the script executable:

$ sudo chmod 755 /opt/scripts/log-request.sh
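Since appending to /var/log typically requires elevated privileges, we can verify the logging logic with a throwaway copy pointed at a temporary file (both /tmp paths are for the dry run only):

```shell
# Throwaway variant of log-request.sh writing under /tmp
cat > /tmp/log-request.sh <<'EOF'
#!/bin/bash
echo "$(date): Request to $1" >> /tmp/nginx-requests.log
EOF
chmod 755 /tmp/log-request.sh

# Invoke it the way script_server.py would, then inspect the log
/tmp/log-request.sh /api/users
cat /tmp/nginx-requests.log
```

Each invocation appends one timestamped line, confirming the script behaves correctly before we point the mirror pipeline at it.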

Overall, this setup ensures that script execution never blocks the main request, thereby providing excellent performance for high-traffic scenarios.

6. Conclusion

We’ve explored three methods to execute shell scripts on every Nginx request: the Lua module for direct execution with output capture, FastCGI for multi-language support and CGI compatibility, and the mirror module for asynchronous fire-and-forget operations.

Each approach offers different trade-offs between performance and functionality. We should keep in mind that executing scripts on every request impacts performance and security. Therefore, we must implement rate limiting, timeouts, and input validation to protect our system.
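As a starting point for that hardening, here's a sketch of Nginx's built-in limit_req machinery applied to a script endpoint; the zone name, rate, and location are placeholders to adapt:

```nginx
# In the http block: one shared-memory zone, keyed by client IP
limit_req_zone $binary_remote_addr zone=scripts:10m rate=5r/s;

server {
    location /trigger {
        # Allow short bursts, reject the excess with 429
        limit_req zone=scripts burst=10 nodelay;
        limit_req_status 429;

        content_by_lua_block {
            os.execute("/path/to/our/script.sh")
            ngx.say("Script executed")
        }
    }
}
```

This caps how often any single client can trigger the script, which limits the damage a misbehaving or malicious caller can do to a process-spawning endpoint.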