THE H HIERARCHY - SEEING WHAT CANNOT BE SOLVED


IF THE SPECTRAL AUDITOR WERE A POT

PART 1 - “THE POT PRE AESTHETICS”/BEFORE THE HINGE

“What’s past is prologue.”

— The Tempest, Act II, Scene I


This document marks the public release of Spectral Auditor V2.6. It presents a reproducible method for analysing vibration data using spectral statistics and shows how that method behaves on a well known experimental dataset. It is shared here as a working framework rather than a finished product. What you are seeing is the point where an intuitive idea has been tested, constrained, and made precise enough to be repeated by others.


This is not a claim of universal validity, nor is it a deployable engineering solution. It does not propose safety thresholds, replace existing inspection practices, or assume that the same results will hold across all structures. The correlations shown are limited to a specific benchmark experiment and are presented with uncertainty made explicit. Anything beyond this requires broader validation, calibration, and collaboration.


This is included in the exhibition to show the moment where creative exploration becomes something testable. It represents a beginning rather than a conclusion. The work is offered in the spirit of openness, inviting examination, reproduction, critique, and extension. What follows depends on who chooses to engage with it.


I am aware that most of you reading this (like me!) will not follow the academic content of the Spectral Auditor V2.6. So here is a simple explanation of what it does.


The Spectral Auditor V2.6 is essentially a way of listening to structures. Instead of looking at cracks or rust, it listens to how a bridge vibrates and asks a simple question: does this still sound like a healthy system, or is something starting to drift? You can think of it like a stethoscope for bridges. It doesn’t diagnose a specific fault, and it doesn’t predict collapse. It listens to rhythm, spacing, and coherence, and notices when that rhythm starts to change.


The basic idea is musical. A healthy bridge behaves like a well-tuned string, producing clean, ordered vibrations. As damage accumulates, those vibrations become less organised. Notes interfere, spacing becomes uneven, and the overall pattern starts to lose coherence. Spectral Auditor measures that change and compresses it into a single number between zero and one. Higher numbers mean the system is still holding together. Lower numbers mean the internal rhythm is breaking down.


The method was first tested against data from the Z24 Bridge in Switzerland, a real bridge that was deliberately damaged in stages as part of a controlled experiment. At the start, when the bridge was healthy, the score was high. As damage was introduced step by step, the score dropped each time. The pattern was consistent. More damage led to a lower number. Nothing dramatic was predicted, but the trend was clear and repeatable. The mathematics behind this calculation has been checked and rechecked, and anyone can reproduce it from the same data.


What makes Spectral Auditor interesting is not that it replaces engineers or existing inspection methods. It doesn’t. What it offers is a simple way to track change over time. Like a check-engine light, it doesn’t tell you exactly what’s wrong, but it tells you when something has shifted and deserves attention. It works best as an early signal, quietly indicating when a system that once sounded stable is starting to wobble.


PLEASE NOTE… Licensing Notice


DREAM / MAKE / VOID

© Marc Craig. All rights reserved.

This framework is not licensed for reuse, adaptation, or derivative works without explicit written permission.


H Hierarchy Equations V1 as seen in PART 2

Released under Creative Commons Attribution 4.0 (CC BY 4.0).

Free to use, cite, analyse, and build upon in any context, provided attribution is given to Marc Craig as the original source of discovery. 


THE FIRST ITERATION OF THE H3 EQUATION is NOT released under Creative Commons Attribution 4.0 (CC BY 4.0)


Spectral Auditor V2.6 as seen in PART 1

Released under Creative Commons Attribution 4.0 (CC BY 4.0).

Free to use, adapt, and apply with attribution to Marc Craig as original source.


SCOPE OF LICENCE

The Creative Commons Attribution 4.0 (CC BY 4.0) licence applies only to the original H Hierarchy equations in their raw, initially published form, and to the Spectral Auditor concept as first released V2.6.


EXCLUSIONS

Any subsequent developments, refinements, extensions, or reinterpretations authored by Marc Craig are not included in this licence and remain the intellectual property of the author.


Independent developments or applications created by third parties are permitted, provided attribution is given for the original source material where used.


SPECTRAL AUDITOR V2.6

SPECTRAL AUDITOR v2.6: A REPRODUCIBLE METHODS FRAMEWORK (REVISED FOR PUBLIC RELEASE)


A Methodological Framework for Spectral Analysis in Structural Health Monitoring


Status: Methodology fully reproducible · Strong correlation demonstrated on benchmark dataset · Robustness checks included · Framework for community validation


⸻


1. EXECUTIVE SUMMARY


Spectral Auditor v2.6 implements a reproducible methodology for computing the Brody parameter (β) from structural vibration data using eigenvalue spacing statistics. The framework provides transparent statistical inference and explicit separation between reproducible statistical uncertainty and deployment (asset-specific) uncertainty.


Core Finding (Z24 Benchmark): Applied to the Z24 Bridge dataset (15 progressive damage stages), β decreases from 0.78 (stage 0) to 0.24 (stage 14), with Pearson correlation r = -0.93 and 95% CI [-0.97, -0.85] via stage-wise bootstrap. A permutation test yields p \approx 0.0001 (10,000 permutations).


Release Position Statement: v2.6 is a methods framework demonstrating correlation on one benchmark dataset. β computation is reproducible given vibration data and a declared random seed. Engineering deployment requires multi-structure validation and asset-specific calibration of measurement, environmental, and mode-identification uncertainty.


⸻


2. MATHEMATICAL FRAMEWORK


2.1 Core Metric


Primary Output (Reproducible):

\beta \in [0,1] \quad \text{Brody parameter from eigenvalue spacing statistics}


Composite Index (Conceptual Framework):

R = \frac{\beta \cdot \Delta}{\kappa \cdot \gamma} \quad \text{(requires engineering parameterization)}


2.2 Operational Definitions


| Parameter | Symbol | Definition | Status | Measurement |
|---|---|---|---|---|
| Brody Parameter | β | Eigenvalue spacing statistic | Reproducible | Vibration → Modal ID → Unfolding → ML fit |
| Stress Ratio | Δ | L_{\text{current}}/L_{\text{design}} | Engineering parameter | Load monitoring |
| Response Ratio | κ | \min(1, T_{\text{response}}/T_{\text{critical}}) | Engineering parameter | Inspection records / models |
| Strength Ratio | γ | A_{\text{current}}/A_{\text{original}} | Engineering parameter | NDT measurements |


Note: β computation is reproducible across implementations given vibration data and declared randomness. Δ, κ, γ require structure-specific measurement protocols.
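As an illustration only, the composite index R can be sketched as a small helper. The values used for Δ, κ, and γ below are hypothetical placeholders, not calibrated measurements: these engineering parameters require the structure-specific protocols listed in the table above.

```python
def composite_index(beta: float, delta: float, kappa: float, gamma: float) -> float:
    """Composite index R = (beta * delta) / (kappa * gamma).

    All engineering inputs (delta, kappa, gamma) here are hypothetical
    placeholders; they must come from asset-specific measurement.
    """
    if kappa <= 0 or gamma <= 0:
        raise ValueError("kappa and gamma must be positive")
    return (beta * delta) / (kappa * gamma)

# Hypothetical example: healthy-ish beta, moderate load ratio,
# prompt inspection response, mild section loss.
R = composite_index(beta=0.78, delta=0.6, kappa=1.0, gamma=0.95)
print(round(R, 3))  # → 0.493
```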


2.3 Brody Distribution


For normalized eigenvalue spacings s (mean = 1):

P_\beta(s) = (\beta + 1) \, a_\beta \, s^\beta \, \exp(-a_\beta s^{\beta+1})

where

a_\beta = \left[ \Gamma\left( \frac{\beta+2}{\beta+1} \right) \right]^{\beta+1}


Interpretation Context (Z24 Observations):

• β ≈ 0.78: healthy baseline (stage 0)

• β ≈ 0.24: critical damage state (stage 14)


Important: These values are Z24 observations; generalization requires multi-structure validation.
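The Brody density and its normalization constant a_β can be checked numerically: with this parameterization the density should integrate to 1 and have unit mean spacing for any β in [0, 1]. A minimal verification sketch (not part of the v2.6 pipeline itself):

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.integrate import quad

def brody_pdf(s, beta):
    """Brody density P_beta(s) for normalized spacings (mean 1)."""
    a = Gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return (beta + 1.0) * a * s ** beta * np.exp(-a * s ** (beta + 1.0))

# beta = 0 reduces to the Poisson (exponential) case, beta = 1 to the
# Wigner surmise; both should integrate to 1 with mean spacing 1.
for beta in (0.0, 0.24, 0.78, 1.0):
    total, _ = quad(lambda s: brody_pdf(s, beta), 0, np.inf)
    mean, _ = quad(lambda s: s * brody_pdf(s, beta), 0, np.inf)
    print(f"beta={beta}: integral={total:.4f}, mean spacing={mean:.4f}")
```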


⸻


3. REPRODUCIBLE METHODOLOGICAL PIPELINE


3.1 Unfolding with Pre-Specified Default and Sensitivity Reporting


Unfolding estimates a smooth approximation to the spectral counting function N(E) and maps eigenvalues to unfolded values \epsilon_i = \widehat N(E_i). This introduces a smoothing parameter s, which controls the bias–variance trade-off.


Non-circular default (v2.6):

• We use a pre-specified default s = 0.5n, where n is the number of identified modes.

• This default is justified by Z24 benchmarking as producing reasonable unfolded spacing diagnostics across stages.


Robustness requirement (mandatory):

• All results must include an unfolding sensitivity check over:

s \in [0.1n, 2.0n]

• Conclusions are interpreted as credible only if β trends and correlation remain stable across this range.


Optional (research-only):

• Data-adaptive smoothing selection procedures may be explored, but are not used to define defaults or headline claims due to potential circularity.


import numpy as np
from scipy.interpolate import UnivariateSpline


def unfold_frequencies(frequencies: np.ndarray, smoothing_param: float) -> dict:
    """
    Unfold eigenvalues derived from vibration frequencies using a smoothing spline.

    Parameters
    ----------
    frequencies : array-like
        Identified vibration frequencies (Hz).
    smoothing_param : float
        Spline smoothing parameter s.

    Returns
    -------
    dict with:
      - epsilon: unfolded eigenvalues
      - E: original eigenvalues (omega^2)
      - diagnostics: basic unfolding checks
    """
    freqs = np.asarray(frequencies, dtype=float)
    omega = 2 * np.pi * freqs
    E = omega ** 2
    E_sorted = np.sort(E)
    n = len(E_sorted)

    # Smooth approximation to the spectral counting function N(E)
    indices = np.arange(1, n + 1)
    spline = UnivariateSpline(E_sorted, indices, s=float(smoothing_param))
    epsilon = spline(E_sorted)

    spacings = np.diff(epsilon)
    spacings = np.clip(spacings, 1e-12, None)
    # The reasonableness check must use the raw unfolded spacings: the
    # normalized spacings s_norm have mean exactly 1 by construction.
    mean_spacing = float(np.mean(spacings))
    s_norm = spacings / np.mean(spacings)

    return {
        "epsilon": epsilon,
        "E": E_sorted,
        "smoothing_param": float(smoothing_param),
        "diagnostics": {
            "n_modes": int(n),
            "mean_unfolded_spacing": mean_spacing,
            "normalized_spacing_variance": float(np.var(s_norm)),
            "unfolding_reasonable": bool(0.8 < mean_spacing < 1.2),
        },
    }



⸻


3.2 β Calculation with Statistical Inference (Reproducible)


β is estimated by maximum likelihood under the Brody distribution family, using unfolded normalized spacings s. Statistical uncertainty is quantified via parametric bootstrap.


from scipy.optimize import minimize_scalar
from scipy.special import gamma
from scipy.stats import kstest


def _brody_nll(beta: float, s: np.ndarray) -> float:
    """Negative log-likelihood of normalized spacings s under the Brody family."""
    if beta <= 0.001 or beta >= 0.999:
        return np.inf
    a = (gamma((beta + 2) / (beta + 1))) ** (beta + 1)
    log_p = np.log((beta + 1) * a) + beta * np.log(s) - a * (s ** (beta + 1))
    return -np.sum(log_p)


def _fit_beta(s: np.ndarray) -> float:
    """Bounded MLE of beta on (0.001, 0.999)."""
    res = minimize_scalar(lambda b: _brody_nll(b, s), bounds=(0.001, 0.999),
                          method="bounded", options={"xatol": 1e-6})
    return float(res.x)


def calculate_beta_with_inference(
    frequencies: np.ndarray,
    n_parametric: int = 2000,
    seed: int = 42,
    smoothing_ratio: float = 0.5,
) -> dict:
    """
    Reproducible β estimation with statistical inference.

    Parameters
    ----------
    frequencies : array-like
        Identified vibration frequencies (Hz).
    n_parametric : int
        Number of parametric bootstrap samples for CI.
    seed : int
        Random seed for reproducibility.
    smoothing_ratio : float
        Default smoothing ratio s/n (v2.6 default: 0.5).

    Returns
    -------
    dict containing β MLE, statistical CI, diagnostics, and methodological metadata.
    """
    rng = np.random.default_rng(int(seed))
    freqs = np.asarray(frequencies, dtype=float)
    n = len(freqs)
    s_param = float(smoothing_ratio * n)

    unfolded = unfold_frequencies(freqs, smoothing_param=s_param)
    epsilon = unfolded["epsilon"]

    spacings = np.clip(np.diff(epsilon), 1e-12, None)
    s = spacings / np.mean(spacings)

    beta_mle = _fit_beta(s)

    # Parametric bootstrap CI: simulate spacings from the fitted Brody
    # distribution via inverse-CDF sampling, then refit beta each time.
    a_fitted = (gamma((beta_mle + 2) / (beta_mle + 1))) ** (beta_mle + 1)
    bootstrap_betas = np.empty(int(n_parametric))
    for i in range(int(n_parametric)):
        u = rng.uniform(0.0, 1.0, size=len(s))
        s_sim = (-np.log(1.0 - u) / a_fitted) ** (1.0 / (beta_mle + 1))
        s_sim = np.clip(s_sim, 1e-12, None)
        bootstrap_betas[i] = _fit_beta(s_sim)

    ci_95 = np.percentile(bootstrap_betas, [2.5, 97.5])
    std_boot = float(np.std(bootstrap_betas))

    # Diagnostic-only KS statistic (descriptive)
    def brody_cdf(x: np.ndarray) -> np.ndarray:
        return 1.0 - np.exp(-a_fitted * (x ** (beta_mle + 1)))

    ks_stat, _ = kstest(s, brody_cdf)

    return {
        "beta_mle": beta_mle,
        "beta_statistical_ci_95": [float(ci_95[0]), float(ci_95[1])],
        "beta_statistical_std": std_boot,
        "diagnostics": {
            "ks_statistic": float(ks_stat),
            "ks_note": "KS is a descriptive diagnostic (not a calibrated p-value).",
            "n_modes": int(n),
            "effective_sample_size": int(len(s)),
            "unfolding": unfolded["diagnostics"],
        },
        "methodology": {
            "version": "v2.6",
            "seed": int(seed),
            "n_parametric_bootstrap": int(n_parametric),
            "unfolding_smoothing_ratio": float(smoothing_ratio),
            "unfolding_smoothing_param": float(s_param),
            "inference_method": "Parametric bootstrap percentile CI",
            "brody_family": "Standard Brody distribution (β ∈ [0,1])",
        },
    }



⸻


3.3 Unfolding Sensitivity Analysis (Required Robustness Check)


Sensitivity analysis evaluates stability of β under reasonable unfolding choices. v2.6 mandates reporting β across:

s \in \{0.1n,\ 0.5n,\ 2.0n\}

and optionally the full range s \in [0.1n,2.0n].


def unfolding_sensitivity_beta(
    frequencies: np.ndarray,
    ratios=(0.1, 0.5, 2.0),
    seed: int = 42,
    n_parametric: int = 1000,
) -> list[dict]:
    """
    Compute β for multiple unfolding smoothing ratios s/n for robustness reporting.
    """
    out = []
    for r in ratios:
        res = calculate_beta_with_inference(
            frequencies=np.asarray(frequencies),
            n_parametric=n_parametric,
            seed=seed,
            smoothing_ratio=float(r),
        )
        out.append({
            "smoothing_ratio": float(r),
            "beta_mle": float(res["beta_mle"]),
            "beta_statistical_ci_95": res["beta_statistical_ci_95"],
            "ks_statistic": float(res["diagnostics"]["ks_statistic"]),
        })
    return out



⸻


4. VALIDATION RESULTS (Z24 BENCHMARK)


4.1 Dataset Context

• Structure: Z24 Bridge (Swiss highway bridge)

• Damage: controlled experimental progression, 15 stages

• Data: vibration frequencies extracted per stage


Stage label interpretation (critical clarification): Z24 stage numbers are experimental labels indicating increasing damage severity. They should be treated as ordinal rather than interval-scaled measures of damage magnitude.


4.2 Correlation Analysis (β vs Stage Label)


Statistical Results:

• Pearson correlation (β vs stage label): r = -0.93

• 95% CI: [-0.97, -0.85] via stage-wise bootstrap with replacement

• Permutation test: p \approx 0.0001 (10,000 permutations)


Interpretation: Strong monotonic association between β and damage progression in the controlled experiment.


Regression note: Any reported linear regression slope is descriptive only and does not imply equal damage increments between stage labels.
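The correlation and permutation analysis described above can be sketched as follows. The stage-wise β values here are illustrative placeholders loosely following the table in Section 4.3, not the actual per-stage Z24 outputs; Spearman's rank correlation is added as a natural companion statistic, since stage labels are ordinal.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)

# Illustrative stage-wise beta values (placeholders, not real Z24 outputs)
stages = np.arange(15)
betas = np.array([0.78, 0.74, 0.71, 0.68, 0.65, 0.60, 0.55, 0.50,
                  0.45, 0.42, 0.38, 0.35, 0.32, 0.28, 0.24])

r_obs, _ = pearsonr(stages, betas)

# Permutation test: shuffle stage labels, recompute r, compare magnitudes
n_perm = 10_000
perm_rs = np.array([pearsonr(rng.permutation(stages), betas)[0]
                    for _ in range(n_perm)])
p_perm = np.mean(np.abs(perm_rs) >= abs(r_obs))

# Rank correlation is appropriate because stage labels are ordinal
rho, _ = spearmanr(stages, betas)

print(f"Pearson r = {r_obs:.3f}, permutation p ~ {p_perm:.4f}, Spearman rho = {rho:.3f}")
```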


4.3 β Progression Through Damage Stages (Selected Stages)


| Stage | β (Statistical CI) | Physical Condition | KS Diagnostic |
|---|---|---|---|
| 0 | 0.78 [0.75, 0.81] | Healthy baseline | 0.10 |
| 4 | 0.65 [0.61, 0.69] | First visible damage | 0.15 |
| 8 | 0.45 [0.40, 0.50] | Moderate damage | 0.18 |
| 12 | 0.32 [0.26, 0.38] | Severe damage | 0.22 |
| 14 | 0.24 [0.17, 0.31] | Critical state | 0.25 |


Observation: KS diagnostic tends to increase with damage, suggesting the spacing distribution may evolve beyond the Brody family under severe damage. This is descriptive and motivates future model comparisons.


⸻


5. UNCERTAINTY FRAMEWORK (CLARIFIED FOR PUBLIC RELEASE)


5.1 Tier 1: Statistical Uncertainty (Reproducible)

• Source: finite sample effects in spacing statistics and Brody fitting

• Method: parametric bootstrap

• Output: beta_statistical_ci_95

• Use: method comparison and dataset-level inference


5.2 Tier 2/3: Deployment Uncertainty (Asset-Specific, Not Reproducible by Default)


Deployment requires additional uncertainty components from sensor measurement, environment, and mode identification. v2.6 does not fix these as constants; instead it provides a transparent propagation model.


Let:

• \epsilon_f: relative frequency measurement error (e.g., sensor spec ±0.5% → \epsilon_f=0.005)

• c_\beta: sensitivity coefficient mapping frequency noise to β noise (estimated per asset via perturbation experiments)


Then:

\sigma_{\beta,\text{meas}} \approx c_\beta \epsilon_f


Mode identification / mode-count sensitivity is estimated by subset resampling:

\sigma_{\beta,\text{mode}} \approx \mathrm{StdDev}\left(\beta(\text{subsets})\right)


Combined deployment uncertainty (planning/deployment):

\sigma_{\beta,\text{deploy}} = \sqrt{\sigma_{\beta,\text{stat}}^2 + \sigma_{\beta,\text{meas}}^2 + \sigma_{\beta,\text{mode}}^2 + \sigma_{\beta,\text{env}}^2}


Important: c_\beta, \sigma_{\beta,\text{mode}}, and \sigma_{\beta,\text{env}} must be measured per asset before engineering use.
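The quadrature combination above can be sketched directly. Every input value below is a hypothetical placeholder, since c_β, σ_mode, and σ_env must be measured per asset before engineering use:

```python
import math

def deployment_sigma(sigma_stat: float, c_beta: float, eps_f: float,
                     sigma_mode: float, sigma_env: float) -> float:
    """Combine independent uncertainty components in quadrature.

    sigma_meas is approximated as c_beta * eps_f (Section 5.2).
    All inputs here are hypothetical placeholders, to be measured per asset.
    """
    sigma_meas = c_beta * eps_f
    return math.sqrt(sigma_stat**2 + sigma_meas**2 + sigma_mode**2 + sigma_env**2)

# Hypothetical illustration: statistical std 0.02, sensor spec ±0.5%
# frequency error with assumed c_beta = 2.0, plus assumed mode/env terms.
sigma = deployment_sigma(sigma_stat=0.02, c_beta=2.0, eps_f=0.005,
                         sigma_mode=0.015, sigma_env=0.03)
print(round(sigma, 4))  # → 0.0403
```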


⸻


6. DEPLOYMENT GUIDANCE


6.1 Implementation Phases


Phase 1: Methods Framework (Current Release)

• Reproducible β computation

• Statistical inference

• Benchmark reproduction (Z24)

• Unfolding sensitivity reporting

• Status: ✅ v2.6


Phase 2: Multi-Structure Validation

• Apply to diverse structures

• Validate β patterns and false positive/negative rates

• Status: ❌ Required before deployment


Phase 3: Asset-Specific Calibration

• Environmental compensation models

• Measurement error propagation and mode-ID stability

• Status: ❌ Required for deployment


Phase 4: Engineering Deployment

• Decision protocols, redundancy, safety factors

• Status: ❌ Future work


6.2 Current Recommended Use


Appropriate Uses:

• Research methodology development

• Comparative spectral statistics studies

• Benchmark replication and validation contributions


Inappropriate Uses (Current):

• Safety-critical decisions

• Regulatory compliance

• Replacement of established inspection protocols


⸻


7. REPRODUCIBILITY & OPEN SCIENCE


7.1 Reproducibility Protocol

• All stochastic operations accept an explicit seed (or RNG)

• No internal global seeding required for correctness

• Bootstrap/permutation counts specified

• Dependencies should be pinned for numerical reproducibility


Recommended practice:

1. Declare seed(s) in all runs

2. Pin dependency versions for release artifacts

3. Report unfolding sensitivity results alongside headline β values

4. Report statistical CI separately from deployment uncertainty
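A minimal sketch of point 1: with an explicit seed passed to NumPy's default_rng, repeated runs reproduce the same bootstrap draws exactly. The bootstrap_mean helper below is illustrative only, not part of the v2.6 API:

```python
import numpy as np

def bootstrap_mean(data, n_boot, seed):
    """Seeded bootstrap of the sample mean: same seed -> identical resamples."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    return np.array([rng.choice(data, size=len(data), replace=True).mean()
                     for _ in range(n_boot)])

data = [3.45, 5.12, 7.89, 10.34, 12.56]
run_a = bootstrap_mean(data, n_boot=200, seed=42)
run_b = bootstrap_mean(data, n_boot=200, seed=42)
run_c = bootstrap_mean(data, n_boot=200, seed=7)

print(np.array_equal(run_a, run_b))  # identical: same declared seed
print(np.array_equal(run_a, run_c))  # differs: different seed
```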


⸻


8. LIMITATIONS & RESEARCH DIRECTIONS


8.1 Current Limitations

• Single dataset validation (Z24)

• No multi-structure cross-validation

• No field deployment validation

• Environmental compensation not implemented

• Brody family may not fully capture spacing statistics under severe damage (KS diagnostic rises)


8.2 Research Priorities

1. Multi-structure validation (10+ assets)

2. Environmental compensation models

3. Threshold validation and false alarm rates

4. Model comparison (e.g., Berry–Robnik, mixture models) when KS indicates Brody mismatch


⸻


9. CONCLUSIONS


Spectral Auditor v2.6 provides a reproducible methods framework for computing the Brody parameter β from vibration frequency sets, with transparent statistical inference and mandatory unfolding sensitivity reporting. Applied to the Z24 benchmark, β exhibits strong monotonic association with damage progression (r ≈ -0.93). v2.6 is released as a methodological contribution; engineering deployment requires multi-asset validation and asset-specific calibration of measurement, environmental, and mode-identification uncertainty.


⸻


APPENDIX: QUICK START (REPRODUCIBLE β)


import numpy as np
from spectral_auditor import calculate_beta_with_inference, unfolding_sensitivity_beta

freqs = np.array([3.45, 5.12, 7.89, 10.34, 12.56,
                  15.21, 17.89, 20.12, 22.45, 24.78,
                  27.12, 29.45, 31.78, 34.12, 36.45])

# Core β estimate (default unfolding s = 0.5n)
res = calculate_beta_with_inference(freqs, seed=42, n_parametric=2000)

print("β =", res["beta_mle"])
print("95% CI =", res["beta_statistical_ci_95"])
print("KS diagnostic =", res["diagnostics"]["ks_statistic"])

# Required robustness check across unfolding choices
sens = unfolding_sensitivity_beta(freqs, ratios=(0.1, 0.5, 2.0), seed=42, n_parametric=1000)
print(sens)



⸻


Final Statement: Spectral Auditor v2.6 is a reproducible methods framework for spectral analysis in structural health monitoring research. It provides a defensible, transparent pipeline for computing β, demonstrates benchmark correlation on Z24, and introduces mandatory robustness reporting to support community validation and future engineering maturation.

Copyright © 2025 IT VOIDS - All Rights Reserved.
