fix(admissions): switch to EES content API + correct publication slug and columns
All checks were successful
Build and Push Docker Images / Build Backend (FastAPI) (push) Successful in 50s
Build and Push Docker Images / Build Frontend (Next.js) (push) Successful in 1m12s
Build and Push Docker Images / Build Integrator (push) Successful in 57s
Build and Push Docker Images / Build Kestra Init (push) Successful in 33s
Build and Push Docker Images / Trigger Portainer Update (push) Successful in 1s
The EES statistics API only exposes ~13 publications; admissions data is not
among them. Switch to the EES content API
(content.explore-education-statistics.service.gov.uk), which covers all
publications.

- ees.py: add get_content_release_id() and download_release_zip_csv(), which
  fetch the release ZIP and extract a named CSV member from it
- admissions.py: use the corrected slug
  (primary-and-secondary-school-applications-and-offers), correct column names
  from the actual CSV (school_urn, total_number_places_offered,
  times_put_as_1st_preference, etc.), derive first_preference_offers_pct from
  the offer/application ratio, filter to primary schools only, and keep the
  most recent year per URN

Also includes a SchoolDetailView UX redesign: parent-first section ordering,
plain-English labels, national average benchmarks, progress score colour
coding, expanded header, quick summary strip, and CSS consolidation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
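The commit does not show the body of download_release_zip_csv(). A minimal sketch of the fetch-and-extract step, assuming the release ZIP URL has already been resolved (the real helper derives it via get_content_release_id(), which is elided here); only stdlib modules are used, and the member-matching rule is an assumption:

```python
import io
import zipfile
from urllib.request import urlopen


def extract_csv_member(zip_bytes: bytes, csv_name: str) -> bytes:
    """Return the bytes of one named CSV member from a release ZIP."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for member in zf.namelist():
            # Match on basename so 'data/foo.csv' resolves from 'foo.csv'
            # (assumed layout; the real archive structure may differ).
            if member.split("/")[-1] == csv_name:
                return zf.read(member)
    raise FileNotFoundError(f"{csv_name} not found in archive")


def download_release_zip_csv(zip_url: str, csv_name: str) -> bytes:
    """Fetch a release ZIP over HTTP and extract one CSV member from it."""
    with urlopen(zip_url, timeout=60) as resp:  # hypothetical, pre-resolved URL
        return extract_csv_member(resp.read(), csv_name)
```

Splitting extraction out of the download keeps the ZIP-handling logic testable without a network call.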
@@ -429,10 +429,17 @@ def run_full_migration(geocode: bool = False) -> bool:
     except Exception as e:
         print(f" Warning: could not save geocode cache: {e}")
 
-    print("Dropping existing tables...")
-    Base.metadata.drop_all(bind=engine)
+    # Only drop the core KS2 tables — leave supplementary tables (ofsted, census,
+    # finance, etc.) intact so a reimport doesn't wipe integrator-populated data.
+    ks2_tables = ["school_results", "schools", "schema_version"]
+    print(f"Dropping core tables: {ks2_tables} ...")
+    inspector = __import__("sqlalchemy").inspect(engine)
+    existing = set(inspector.get_table_names())
+    for tname in ks2_tables:
+        if tname in existing:
+            Base.metadata.tables[tname].drop(bind=engine)
 
-    print("Creating tables...")
+    print("Creating all tables...")
     Base.metadata.create_all(bind=engine)
 
     print("\nLoading CSV data...")
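The selective-drop pattern in the hunk above can be exercised on its own. A minimal sketch with an in-memory SQLite engine and illustrative table names (the real code uses declarative models via Base.metadata):

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, inspect

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()

# Illustrative tables: two "core" tables plus one supplementary table.
Table("schools", metadata, Column("id", Integer, primary_key=True))
Table("school_results", metadata, Column("id", Integer, primary_key=True))
Table("ofsted", metadata, Column("id", Integer, primary_key=True))
metadata.create_all(bind=engine)

# Drop only the core tables, guarding against ones that don't exist yet.
core_tables = ["school_results", "schools"]
existing = set(inspect(engine).get_table_names())
for tname in core_tables:
    if tname in existing:
        metadata.tables[tname].drop(bind=engine)

print(inspect(engine).get_table_names())  # supplementary table survives
```

Consulting the inspector before each drop avoids errors on a fresh database where some core tables were never created, which a blanket drop_all() would also sidestep but at the cost of wiping the supplementary tables.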