Oracle · T-SQL · Teradata · DB2 · Hive · Netezza · 15+ Dialects

Migrate any SQL dialect
to any target

500+ function-to-function mappings, any-to-any SQL transpilation, stored procedure migration, and column-level lineage — powered by custom parsers built for each dialect.

Snowflake BigQuery Databricks Synapse Redshift dbt
SQL Transpilation — Live Examples
Oracle → Snowflake
Teradata → BigQuery
T-SQL → Databricks
Hive HQL → Databricks
DB2 → Snowflake
Netezza → Redshift

15+ source dialects → 7+ cloud targets

15+
Source SQL Dialects
Oracle, T-SQL, Teradata, DB2 & more
500+
Function Mappings
Date, string, NULL, aggregate
7+
Cloud Targets
Snowflake, BigQuery, Databricks & more
100%
Custom Parsers
Per-dialect, not generic ANTLR
Source Dialects

Every legacy SQL dialect, fully understood

MigryX builds a dedicated parser per dialect — not a lowest-common-denominator grammar. Every dialect-specific construct is parsed, classified, and migrated with full fidelity.

Oracle Database
Oracle SQL / PL/SQL
  • DECODE, NVL, NVL2, NULLIF
  • ROWNUM, ROWID pseudo-columns
  • CONNECT BY hierarchical queries
  • MODEL clause (spreadsheet model)
  • PIVOT / UNPIVOT
  • Analytical (window) functions
  • Date arithmetic & interval types
  • DUAL table elimination
  • MERGE (upsert)
  • FORALL / BULK COLLECT (PL/SQL)
→ Migrate Oracle SQL
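As an illustrative sketch of the DECODE / NVL / ROWNUM patterns listed above (table and column names are hypothetical, not actual MigryX output):

```sql
-- Oracle source (illustrative)
SELECT DECODE(status, 'A', 'Active', 'Inactive') AS status_label,
       NVL(bonus, 0) AS bonus
  FROM employees
 WHERE ROWNUM <= 10;

-- Snowflake target: standard CASE, COALESCE, LIMIT
SELECT CASE WHEN status = 'A' THEN 'Active' ELSE 'Inactive' END AS status_label,
       COALESCE(bonus, 0) AS bonus
  FROM employees
 LIMIT 10;
```

Note that when an ORDER BY is present, a ROWNUM filter needs a ROW_NUMBER() rewrite instead of LIMIT, because Oracle applies ROWNUM before the sort.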
Microsoft SQL Server
SQL Server T-SQL
  • TOP N / TOP N PERCENT
  • ISNULL, COALESCE, IIF
  • GETDATE(), CONVERT / CAST
  • CTE (WITH) and recursive CTEs
  • PIVOT / UNPIVOT
  • TRY / CATCH error handling
  • MERGE (upsert)
  • Table variables (@var) & temp tables (#tbl)
  • STRING_AGG, OPENJSON, FOR XML
  • Dynamic SQL (EXEC sp_executesql)
→ Migrate T-SQL
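A sketch of the TOP N / ISNULL / GETDATE() translations above, using hypothetical table and column names:

```sql
-- T-SQL source (illustrative)
SELECT TOP 5 order_id,
       ISNULL(discount, 0) AS discount,
       GETDATE() AS loaded_at
  FROM dbo.orders
 ORDER BY order_date DESC;

-- Databricks SQL target
SELECT order_id,
       COALESCE(discount, 0) AS discount,
       CURRENT_TIMESTAMP() AS loaded_at
  FROM orders
 ORDER BY order_date DESC
 LIMIT 5;
```

The dbo schema qualifier is dropped or remapped to the target catalog during DDL migration.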
Teradata
Teradata SQL
  • QUALIFY (window filter clause)
  • RESET WHEN partitioning
  • NORMALIZE (period normalization)
  • EXPAND ON temporal expansion
  • WITH ... BY summary-row grouping extension
  • SAMPLE clause
  • HashRow / HashAmp functions
  • FORMAT / TITLE column attributes
  • PRIMARY INDEX (distribution)
  • Temporal tables & BTEQ scripting
→ Migrate Teradata SQL
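The QUALIFY clause above illustrates the rewrite pattern for targets without a native equivalent; a hedged sketch with hypothetical names:

```sql
-- Teradata source (illustrative): latest order per customer
SELECT customer_id, order_id, order_date
  FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;

-- Target without native QUALIFY: CTE wrapping
WITH ranked AS (
  SELECT customer_id, order_id, order_date,
         ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
    FROM orders
)
SELECT customer_id, order_id, order_date
  FROM ranked
 WHERE rn = 1;
```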
IBM
IBM DB2
  • FETCH FIRST n ROWS ONLY
  • VALUES INTO assignment
  • MERGE statement
  • SIGNAL / RESIGNAL (error handling)
  • CTE (WITH) expressions
  • XML functions, XMLTABLE
  • WITH UR / CS / RS / RR isolation levels
  • Label-based access control (LBAC)
  • Special registers (CURRENT DATE, etc.)
  • GENERATED ALWAYS / BY DEFAULT
→ Migrate DB2
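A sketch of the FETCH FIRST and special-register translations listed above (hypothetical table names):

```sql
-- DB2 source (illustrative)
SELECT account_id, balance
  FROM accounts
 WHERE open_date < CURRENT DATE
 ORDER BY balance DESC
 FETCH FIRST 10 ROWS ONLY;

-- Snowflake target
SELECT account_id, balance
  FROM accounts
 WHERE open_date < CURRENT_DATE
 ORDER BY balance DESC
 LIMIT 10;
```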
IBM / Pure Data Systems
Netezza (NPS)
  • DISTRIBUTE ON distribution keys
  • Zone map metadata
  • pg_catalog system tables
  • NPS-specific built-in functions
  • SKEWNESS diagnostic functions
  • Netezza SQL extensions
  • Data slice architecture awareness
  • NZLOAD / nzsql scripting
  • Row-level security extensions
  • Column store tables
→ Migrate Netezza
VMware / Broadcom
Greenplum
  • DISTRIBUTED BY (distribution key)
  • External tables (gpfdist)
  • COPY command
  • Partition pruning strategies
  • Greenplum analytics extensions
  • gp_toolkit system catalog
  • Append-optimized tables (AO/CO)
  • Resource queue / group management
  • gphdfs external table integration
  • MADlib ML function calls
→ Migrate Greenplum
Micro Focus / Vertica
Vertica
  • PROJECTIONS (pre-computed views)
  • SEGMENTED BY hash expression
  • PARTITION PRUNE
  • APPROXIMATE_COUNT_DISTINCT
  • INTERPOLATE for time series
  • FlexTable semi-structured data
  • COPY FROM STDIN / COPY LOCAL
  • EXPORT TO PARQUET / ORC
  • Vertica-specific UDFs (C++ extension)
  • Live aggregate projections
→ Migrate Vertica
Apache / Hadoop Ecosystem
Hive HQL
  • MAP / REDUCE execution hints
  • LATERAL VIEW EXPLODE
  • COLLECT_SET / COLLECT_LIST
  • ARRAY, MAP, STRUCT complex types
  • SERDE / storage format directives
  • External tables (LOCATION)
  • ORC / Parquet / Avro storage
  • DISTRIBUTE BY / SORT BY / CLUSTER BY
  • Dynamic partitioning
  • Hive UDF / GenericUDF Java classes
→ Migrate Hive HQL
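The LATERAL VIEW EXPLODE pattern above maps to Snowflake's FLATTEN table function; an illustrative sketch with hypothetical names:

```sql
-- Hive source (illustrative): explode an array column
SELECT order_id, item
  FROM orders
LATERAL VIEW EXPLODE(items) t AS item;

-- Snowflake target: lateral FLATTEN over a VARIANT array
SELECT o.order_id, f.value AS item
  FROM orders o,
       LATERAL FLATTEN(input => o.items) f;
```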
PostgreSQL Global Development Group
PostgreSQL (as source)
  • DO blocks (PL/pgSQL)
  • Arrays & array operators
  • JSONB operators & functions
  • Range types (tsrange, daterange)
  • Table inheritance (INHERITS)
  • Foreign Data Wrappers (FDW)
  • CREATE EXTENSION
  • Hypothetical-set aggregates (WITHIN GROUP)
  • LATERAL joins
  • Materialized views & REFRESH
→ Migrate PostgreSQL
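The JSONB operators above translate to VARIANT path access on Snowflake; an illustrative sketch (hypothetical names, and the @> containment operator is reduced to a single-key equality here):

```sql
-- PostgreSQL source (illustrative): JSONB field extraction
SELECT payload->>'status' AS status
  FROM events
 WHERE payload @> '{"type": "order"}';

-- Snowflake target: VARIANT path access with explicit casts
SELECT payload:status::STRING AS status
  FROM events
 WHERE payload:type::STRING = 'order';
```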
Parser Engine

Custom parsers, not generic grammar

MigryX builds a dedicated parser for each source dialect. This eliminates the parse failures common in generic ANTLR-based tools and enables accurate AST construction for complex stored procedures, CTEs, and dialect-specific syntax.

-- Oracle source: hierarchical query with CONNECT BY
SELECT employee_id, manager_id, LEVEL, name
  FROM employees
  START WITH manager_id IS NULL
  CONNECT BY PRIOR employee_id = manager_id;

-- MigryX output: Snowflake recursive CTE (semantic equivalent)
WITH RECURSIVE emp_hier AS (
  SELECT employee_id, manager_id, 1 AS lvl, name
  FROM employees WHERE manager_id IS NULL
  UNION ALL
  SELECT e.employee_id, e.manager_id, h.lvl+1, e.name
  FROM employees e JOIN emp_hier h ON e.manager_id = h.employee_id
)
SELECT * FROM emp_hier;

Oracle CONNECT BY → Snowflake recursive CTE (verified equivalent)
Capabilities

Complete SQL migration coverage

Every layer of SQL migration is handled — from DDL and DML through stored procedures, views, UDFs, and schema dependency ordering.

  • Custom parser per dialect — not ANTLR generic grammar
  • 500+ function-to-function mappings (date, string, NULL, aggregate)
  • Stored procedure & UDF migration with business logic extraction
  • QUALIFY / ROWNUM / TOP N normalization to window functions / LIMIT
  • CONNECT BY → recursive CTE rewrite
  • Date / time function mapping across all dialects
  • NULL handling normalization (NVL, ISNULL, COALESCE unification)
  • MERGE / UPSERT pattern translation to target dialect
  • Partitioning strategy advisory for target platforms
  • DDL migration — data type mapping, index & statistics DDL
  • Schema object dependency ordering (tables before views before procs)
  • CTE and recursive CTE support across all source and target dialects
  • View and materialized view migration
  • CASE / DECODE / IIF unification to standard CASE expression
  • Window function normalization across all dialects
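For the MERGE / UPSERT translation named above, most cloud targets accept an ANSI-style MERGE with minor syntax shifts; a sketch in Snowflake syntax (object names hypothetical):

```sql
-- Upsert pattern (illustrative, Snowflake syntax)
MERGE INTO dim_customer t
USING stg_customer s
   ON t.customer_id = s.customer_id
 WHEN MATCHED THEN UPDATE SET email = s.email
 WHEN NOT MATCHED THEN INSERT (customer_id, email)
                       VALUES (s.customer_id, s.email);
```

BigQuery and Databricks accept essentially the same statement shape; targets without MERGE get a rewrite into separate UPDATE and INSERT steps.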
Target Platforms
Snowflake
BigQuery
Databricks
Azure Synapse
AWS Redshift
PostgreSQL
dbt
Snowpark Python
UDTF / UDF Conversion

Dialect-specific scalar and table-valued functions are resolved to target equivalents. Where no native equivalent exists, MigryX generates a Python / JavaScript UDF scaffold for the target platform.

Column-Level Lineage

Every migrated SQL object carries forward column-level lineage metadata — source table, source column, transformation expression — surfaced as STTM / data catalog exports.

Function Mapping Library

500+ dialect function translations

MigryX maintains a curated, tested mapping library covering date/time functions, string functions, NULL coalescing, aggregation, type conversion, and dialect-specific syntax rewrites.

Source Dialect | Source Function / Syntax | Target Dialect | Target Equivalent | Category
Oracle | NVL(col, 0) | Snowflake / BigQuery / Databricks | COALESCE(col, 0) | NULL handling
Oracle | DECODE(x, a, b, c) | All targets | CASE WHEN x=a THEN b ELSE c END | Conditional
Oracle | ROWNUM <= N | Snowflake / BigQuery | ROW_NUMBER() OVER(...) <= N / LIMIT N | Row limiting
Oracle | SYSDATE | All targets | CURRENT_TIMESTAMP | Date/time
Oracle | ADD_MONTHS(dt, n) | Snowflake | DATEADD(month, n, dt) | Date arithmetic
Oracle | TRUNC(dt, 'MONTH') | BigQuery | DATE_TRUNC(dt, MONTH) | Date truncation
Oracle | CONNECT BY PRIOR | Snowflake / BigQuery | WITH RECURSIVE ... UNION ALL | Hierarchical
T-SQL | ISNULL(col, 0) | All targets | COALESCE(col, 0) | NULL handling
T-SQL | GETDATE() | All targets | CURRENT_TIMESTAMP | Date/time
T-SQL | TOP N | Snowflake / BigQuery / Databricks | LIMIT N | Row limiting
T-SQL | CONVERT(VARCHAR, dt, 101) | Snowflake | TO_VARCHAR(dt, 'MM/DD/YYYY') | Type conversion
T-SQL | DATEDIFF(day, d1, d2) | BigQuery | DATE_DIFF(d2, d1, DAY) | Date arithmetic
Teradata | QUALIFY ROW_NUMBER()=1 | Snowflake (native) | QUALIFY ROW_NUMBER()=1 | Window filter
Teradata | QUALIFY ROW_NUMBER()=1 | BigQuery / Databricks | Subquery with CTE wrapping | Window filter
DB2 | FETCH FIRST 10 ROWS ONLY | All targets | LIMIT 10 | Row limiting
DB2 | CURRENT DATE | All targets | CURRENT_DATE | Date/time
Hive | LATERAL VIEW EXPLODE(arr) | Snowflake | LATERAL FLATTEN(arr) | Array expansion
Hive | LATERAL VIEW EXPLODE(arr) | BigQuery | UNNEST(arr) | Array expansion
Hive | COLLECT_SET(col) | Snowflake | ARRAY_AGG(DISTINCT col) | Aggregation
Netezza | AGE_IN_YEARS(dt) | Snowflake | DATEDIFF('year', dt, CURRENT_DATE) | Date arithmetic
Vertica | APPROXIMATE_COUNT_DISTINCT(col) | Snowflake | APPROX_COUNT_DISTINCT(col) | Approximation
PostgreSQL | col::INT (cast shorthand) | Snowflake / BigQuery | CAST(col AS INT) | Type conversion

Showing 22 of 500+ mappings. Full mapping library available in the MigryX assessment report.

Methodology

Three phases, zero ambiguity

MigryX follows a structured migration methodology that covers discovery through validation — with every decision auditable and every migration fully documented.

1

Analyze

Inventory all SQL artifacts and build a complete dependency graph before a single line is rewritten.

  • Parse all SQL artifacts (queries, views, procs, UDFs, DDL)
  • Classify dialect version and dialect-specific patterns
  • Detect dialect-specific constructs (CONNECT BY, QUALIFY, etc.)
  • Build object dependency graph (execution order)
  • Identify complexity tiers and migration risk
  • Produce source inventory with line counts and object types
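The dependency-ordering step above can be sketched as a topological sort over the parsed object graph. This is an illustrative sketch with hypothetical object names, not MigryX's actual implementation:

```python
from graphlib import TopologicalSorter

# Hypothetical object graph: each SQL object maps to the objects it depends on.
# Tables must exist before the views that read them; views before procedures.
deps = {
    "rpt_sales_proc": {"v_sales"},        # procedure reads the view
    "v_sales": {"orders", "customers"},   # view joins two tables
    "orders": set(),
    "customers": set(),
}

# static_order() yields each object only after all of its dependencies,
# giving a safe DDL execution order for the target platform.
order = list(TopologicalSorter(deps).static_order())
print(order)  # tables first, then the view, then the procedure
```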
2

Convert

Apply the mapping library to rewrite every SQL object to target-dialect equivalent syntax.

  • Apply 500+ function mapping library
  • Rewrite dialect-specific syntax constructs
  • Normalize to target dialect idioms
  • Generate equivalent DDL (data types, indexes)
  • Emit dbt models / macros where applicable
  • Produce column-level lineage for every migrated object
3

Validate

Confirm semantic equivalence — not just syntactic validity — before migration is signed off.

  • Execute query parity testing (row count, hash)
  • Result set comparison on sample datasets
  • EXPLAIN plan analysis on target platform
  • NULL handling edge-case tests
  • Date arithmetic boundary tests
  • Produce validation report for each migrated object
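A minimal parity check of the kind described above, sketched in Snowflake syntax (table names are hypothetical; HASH_AGG is Snowflake-specific, and other targets use their own order-independent hash aggregates):

```sql
-- Row-count and content-hash parity between a staged source extract
-- and the migrated target table (illustrative)
SELECT
  (SELECT COUNT(*)    FROM source_stage.orders) AS src_rows,
  (SELECT COUNT(*)    FROM analytics.orders)    AS tgt_rows,
  (SELECT HASH_AGG(*) FROM source_stage.orders) AS src_hash,
  (SELECT HASH_AGG(*) FROM analytics.orders)    AS tgt_hash;
-- Parity holds when src_rows = tgt_rows AND src_hash = tgt_hash.
```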
Supported Platforms

From any legacy source to any modern target

Source Dialects
Oracle SQL / PL/SQL
SQL Server T-SQL
Teradata SQL
IBM DB2
Netezza NPS
Greenplum
Vertica
Hive HQL
PostgreSQL
Target Platforms
Snowflake
BigQuery
Databricks
Azure Synapse / Fabric
AWS Redshift
dbt
PostgreSQL (modern)
Deployment

On-premise, air-gapped, or cloud

MigryX runs wherever your data lives. Enterprise clients with restricted environments can deploy fully air-gapped with no outbound internet access required.

🔒

On-Premise / Air-Gapped

Full MigryX platform deployed inside your network perimeter. No source code, SQL, or metadata leaves your environment. Suitable for regulated industries (FSI, Healthcare, Government).

Cloud / SaaS

Browser-based MigryX platform with SOC 2 controls. Tenant-isolated processing. Connect your source environment via secure agent or upload SQL artifact packages directly.

SQL Migration Pilot

Start with a free SQL dialect assessment

We analyze your SQL inventory, classify dialect patterns, count object types, and produce a migration readiness report — before any contract is signed.

  • Complete SQL artifact scan (queries, views, procedures, DDL)
  • Dialect classification and version detection
  • Complexity tier scoring per object
  • Function coverage report (which functions we map)
  • Estimated migration timeline and effort breakdown
  • Sample output: 5 migrated SQL objects to your target dialect
Assessment
Free
No commitment required
Schedule Now
Enterprise SQL Migration pilot engagements from $25K
Contact

Talk to a SQL migration specialist

Request SQL Migration Assessment

Why MigryX for SQL migration?

MigryX is the only platform with dedicated per-dialect parsers built by SQL migration engineers — not generic grammar tools. Our clients migrate 95%+ of SQL objects automatically.

  • 📍 Dedicated SQL migration engineers
  • 🔒 On-premise / air-gapped deployment available
  • ✅ Query parity validation included
  • 📊 Column-level lineage for every migrated object

Prefer to schedule directly?

Book a 30-minute SQL migration discovery call with our engineering team.

Schedule on Calendly

Migrate your SQL — any dialect, any target

MigryX migrates SQL across 15+ dialects — Oracle, T-SQL, Teradata, DB2, Hive, Netezza, and more — to Snowflake, BigQuery, Databricks, and modern cloud targets with 500+ function mappings and column-level lineage.