You finish upgrading your Wazuh stack — indexer, manager, dashboard, all green. You log in. The agent list is empty. The Security section says “You have no permissions.” Your user account is a global OpenSearch admin. Nothing in the logs looks wrong.
This is that bug.
## How Wazuh Dashboard authentication actually works
There are two separate permission systems in play, and confusing them is easy.
```
Browser
 └─ OpenSearch session (cookie)
     └─ Wazuh Dashboard plugin
         ├─ OpenSearch queries (alerts, indices)  ← OpenSearch RBAC
         └─ Wazuh Manager API (agents, rules, …)  ← Wazuh RBAC
```

When `run_as: true` is configured in `wazuh.yml`, every time a user loads the dashboard the plugin authenticates to the Wazuh Manager API as the service account `wazuh-wui`, then immediately calls:

```
POST /security/user/authenticate/run_as
```

with the logged-in user's full OpenSearch identity in the body:

```json
{
  "user_name": "alice.admin",
  "backend_roles": ["admin"],
  "roles": ["own_index", "all_access"],
  ...
}
```

The manager evaluates its own RBAC rules against that body to decide what Wazuh role — and therefore what policies — to embed in the JWT it returns. If no rule matches, the JWT contains `rbac_roles: []` and the user silently has zero access to anything managed by the Wazuh API.
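For reference, `run_as` is enabled per host entry in the dashboard plugin's `wazuh.yml`. The host alias, URL, and password below are placeholders; adapt them to your deployment:

```yaml
hosts:
  - production:                    # placeholder host alias
      url: https://wazuh-manager   # placeholder manager URL
      port: 55000
      username: wazuh-wui
      password: "<from-your-secret>"
      run_as: true                 # forward the OpenSearch identity to the manager
```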
## The symptom and why it is so confusing
With an empty role list the API does not return `403 Forbidden`. It returns `200 OK` with an empty result set. The agent list renders with zero rows. The Security section checks for `security:read` on `user:id:*` and `role:id:*`, finds nothing, and shows the “no permissions” banner.
Everything looks operational. The API is up. Authentication succeeds. Logs are clean. You just cannot see anything.
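A quick way to confirm you are in this state is to grab the JWT from the dashboard's network traffic and decode its payload. The helper below is a generic, dependency-free sketch; the sample token is fabricated for illustration:

```python
import base64, json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature."""
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token whose payload carries an empty Wazuh role list
body = base64.urlsafe_b64encode(json.dumps({'rbac_roles': []}).encode()).decode().rstrip('=')
token = 'eyJhbGciOiJIUzI1NiJ9.' + body + '.sig'
print(jwt_claims(token)['rbac_roles'])  # an empty list means no RBAC rule matched
```

If `rbac_roles` comes back empty for a user who should be an administrator, you are looking at this bug.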
## Root cause part 1 — the default rules do not match non-default usernames
Out of the box, Wazuh ships exactly two RBAC rules that map OpenSearch identities
to the administrator role:
| ID | Name | Rule |
|----|------|------|
| 1 | wui_elastic_admin | `{"FIND": {"username": "elastic"}}` |
| 2 | wui_opensearch_admin | `{"FIND": {"user_name": "admin"}}` |
These cover the built-in elastic and admin OpenSearch users. Anyone whose
username is anything else — including usernames that contain the word “admin” —
matches neither rule.
The `FIND` operator in Wazuh RBAC rules performs exact value matching, not substring search. Wazuh's source confirms this: `FIND` resolves to `MATCH`, which compares strings by equality after converting them to single-element lists. `alice.admin != admin`.
The fix is a third rule that matches on `backend_roles` rather than `user_name`:

```json
{"FIND": {"backend_roles": "admin"}}
```

`backend_roles` is an array. `FIND` checks for element membership in arrays, so this correctly matches any OpenSearch user who has `admin` in their backend role list — regardless of what their username is.
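To make the matching behaviour concrete, here is a simplified re-implementation of the semantics described above. This is an illustrative sketch, not Wazuh's actual code:

```python
def find(criteria: dict, ctx: dict) -> bool:
    """Sketch of FIND: each required value must equal the context value,
    or be an element of it when the context value is an array."""
    for key, wanted in criteria.items():
        value = ctx.get(key)
        values = value if isinstance(value, list) else [value]
        if wanted not in values:
            return False
    return True

# The identity the dashboard forwards via run_as
run_as_body = {
    'user_name': 'alice.admin',
    'backend_roles': ['admin'],
    'roles': ['own_index', 'all_access'],
}

print(find({'user_name': 'admin'}, run_as_body))      # False: exact match, not substring
print(find({'backend_roles': 'admin'}, run_as_body))  # True: membership in the array
```

The default username rule misses `alice.admin` entirely, while the `backend_roles` rule matches every user carrying the `admin` backend role.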
## Root cause part 2 — the rule disappears on every upgrade
Add the rule, confirm it works, upgrade again a few months later, and the symptoms are back. That is what makes this a recurring problem rather than a one-time fix.
Wazuh’s manager runs a database migration on first start after a new container image is pulled. The migration reconstructs all built-in records from bundled YAML files and then carries over custom data — but only for rows above a reserved ID boundary:
```python
# wazuh/rbac/orm.py
MAX_ID_RESERVED = 99

db_manager.migrate_data(
    source=DB_FILE,
    target=DB_FILE_TMP,
    from_id=MAX_ID_RESERVED + 1  # only IDs ≥ 100 survive
)
```

Any custom rule inserted with an ID of 1–99 is silently recreated from defaults and your changes are gone. The SQLite auto-increment counter starts from the highest existing ID, so a freshly inserted rule typically lands at ID 3 — right in the danger zone.
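The boundary is easy to demonstrate with a toy database. This mimics the effect of the migration, not Wazuh's actual code:

```python
import sqlite3

MAX_ID_RESERVED = 99  # same boundary the Wazuh migration uses

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE rules (id INTEGER PRIMARY KEY, name TEXT)')
db.executemany('INSERT INTO rules VALUES (?, ?)', [
    (3,   'my_rule_at_auto_increment_id'),  # lands in the reserved range
    (100, 'my_rule_at_safe_id'),
])

# Only rows above the reserved boundary are carried over on upgrade
survivors = [name for (name,) in db.execute(
    'SELECT name FROM rules WHERE id > ?', (MAX_ID_RESERVED,))]
print(survivors)  # only the rule at ID 100 survives
```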
## The fix
Insert the rule at ID 100 or higher. Run this against both `wazuh-manager-master-0` and `wazuh-manager-worker-0` (each pod has its own copy of the RBAC database):
```bash
kubectl exec -n wazuh <pod> -- python3 << 'EOF'
import sqlite3, datetime, json

DB = '/var/ossec/api/configuration/security/rbac.db'
RULE_ID = 100
RULE_NAME = 'wui_opensearch_admin_backend'
RULE_JSON = json.dumps({'FIND': {'backend_roles': 'admin'}})
ROLE_ID = 1  # administrator

conn = sqlite3.connect(DB)
cur = conn.cursor()

# Clean up any stale copy at a reserved ID
cur.execute('DELETE FROM roles_rules WHERE rule_id IN '
            '(SELECT id FROM rules WHERE name = ? AND id != ?)', (RULE_NAME, RULE_ID))
cur.execute('DELETE FROM rules WHERE name = ? AND id != ?', (RULE_NAME, RULE_ID))

cur.execute('SELECT id FROM rules WHERE id = ?', (RULE_ID,))
if cur.fetchone():
    print(f'Rule {RULE_ID} already present')
else:
    now = datetime.datetime.utcnow().isoformat()
    cur.execute('INSERT INTO rules (id, name, rule, created_at) VALUES (?, ?, ?, ?)',
                (RULE_ID, RULE_NAME, RULE_JSON, now))
    cur.execute('INSERT INTO roles_rules (role_id, rule_id, created_at) VALUES (?, ?, ?)',
                (ROLE_ID, RULE_ID, now))
    print(f'Rule {RULE_ID} created and linked to administrator role')
conn.commit()
EOF
```

> **Tip:** The Wazuh API does not need to be restarted. The RBAC engine reads from the database on each token issuance, so the rule takes effect immediately for new logins.
Why direct SQLite access instead of the API? After an upgrade resets the database, the `wazuh-wui` password hash in the database may no longer match the value stored in your Kubernetes secret. The API will return `401` and you cannot authenticate to fix anything through it. `kubectl exec` bypasses that cleanly.
## After applying the fix — users must log in again
The Wazuh API bakes `rbac_roles` into the JWT at the moment it is issued. Existing tokens carry the old empty role list, and there is no in-band way to refresh them short of waiting for expiry (default: 15 minutes) or revoking them explicitly.
The fastest path is a fresh login. If the normal dashboard logout does not trigger a new `run_as` call — which can happen when the OpenSearch session cookie persists and the dashboard reuses cached state — use a private browser window to force a clean authentication.
## Prevention — automate it in your upgrade pipeline
Knowing the root cause makes the prevention obvious: re-apply the rule as the last step of every manager upgrade. The snippet above is idempotent, so there is no risk in running it unconditionally.
In a Kubernetes upgrade script this looks like:

```bash
ensure_rbac_rules() {
  local PYTHON_SCRIPT
  PYTHON_SCRIPT=$(cat << 'PYEOF'
import sqlite3, datetime, json
DB = '/var/ossec/api/configuration/security/rbac.db'
RULE_ID = 100
RULE_NAME = 'wui_opensearch_admin_backend'
RULE_JSON = json.dumps({'FIND': {'backend_roles': 'admin'}})
conn = sqlite3.connect(DB)
cur = conn.cursor()
cur.execute('DELETE FROM roles_rules WHERE rule_id IN '
            '(SELECT id FROM rules WHERE name = ? AND id != ?)', (RULE_NAME, RULE_ID))
cur.execute('DELETE FROM rules WHERE name = ? AND id != ?', (RULE_NAME, RULE_ID))
cur.execute('SELECT id FROM rules WHERE id = ?', (RULE_ID,))
if cur.fetchone():
    print(f'Rule {RULE_ID} already present')
else:
    now = datetime.datetime.utcnow().isoformat()
    cur.execute('INSERT INTO rules (id, name, rule, created_at) VALUES (?, ?, ?, ?)',
                (RULE_ID, RULE_NAME, RULE_JSON, now))
    cur.execute('INSERT INTO roles_rules (role_id, rule_id, created_at) VALUES (1, ?, ?)',
                (RULE_ID, now))
    print(f'Rule {RULE_ID} created')
conn.commit()
PYEOF
)
  for POD in wazuh-manager-master-0 wazuh-manager-worker-0; do
    kubectl -n wazuh exec "$POD" -- python3 -c "$PYTHON_SCRIPT"
  done
}

# Call after rollout_status completes
ensure_rbac_rules
```

## Takeaways
- **Wazuh has two independent permission systems.** OpenSearch RBAC controls index access. Wazuh RBAC controls the Manager API. A user can be an OpenSearch superuser and still have zero Wazuh API permissions.
- **`FIND` does exact matching on strings, element matching on arrays.** A rule targeting `user_name: "admin"` will not match `alice.admin`. Target `backend_roles: "admin"` instead to catch all users with that role regardless of their username.
- **RBAC rules with ID ≤ 99 are overwritten on every upgrade.** This is intentional behaviour in Wazuh's migration system. Insert custom rules at ID ≥ 100.
- **Automate the re-application.** Add an idempotent post-upgrade step to your pipeline. The rule is cheap to check and insert. Diagnosing the missing rule again from scratch is not.
- **When RBAC is wrong, the API returns 200 with empty data.** This is by design — Wazuh filters results to the resources your policies cover, and an empty policy set means empty results. Do not spend time looking for errors that will never appear.