I set up replication between MySQL and PostgreSQL using pg_chameleon.
The replica starts and stays in the running state, but data is not being synchronized: only the table schema is replicated, never the rows. In other words, pg_chameleon replays DDL from MySQL but not DML. I need help figuring out why.
I started the replica in debug mode with:
chameleon start_replica --config default --source mysql --debug
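For context, the setup sequence I followed was along these lines (this is the standard workflow from the pg_chameleon documentation, shown here as a sketch; the config and source names match my setup below):
chameleon set_configuration_files
chameleon create_replica_schema --config default
chameleon add_source --config default --source mysql
chameleon init_replica --config default --source mysql
chameleon start_replica --config default --source mysql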
(ph_new) ist@ist:~/.pg_chameleon/configuration$ chameleon show_status --source mysql --debug
2020-09-06 20:13:24 MainProcess DEBUG pg_lib.py (659): Changing the autocommit flag to True
2020-09-06 20:13:24 MainProcess DEBUG pg_lib.py (659): Changing the autocommit flag to True
2020-09-06 20:13:24 MainProcess DEBUG pg_lib.py (659): Changing the autocommit flag to True
Source id  Source name  Type  Status  Consistent  Read lag  Last read  Replay lag  Last replay

== Schema mappings ==
Origin schema       Destination schema
ph_p                shc_sitef

== Replica status ==
Tables not replicated   0
Tables replicated       0
All tables              0
Last maintenance        N/A
Next maintenance        N/A
Replayed rows
Replayed DDL
Skipped rows
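show_status reports zero tables in the replica catalogue, which I read as the initial copy never having registered any table for ph_p. If that is what happened, my understanding from the documentation is that the initial snapshot can be redone with init_replica (standard command; --debug is optional):
chameleon init_replica --config default --source mysql --debug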
--------[IN MYSQL]
mysql> create table emp (id int PRIMARY KEY, first_name varchar(20), last_name varchar(20));
Query OK, 0 rows affected (0.42 sec)
mysql> INSERT INTO emp VALUES (1,'avinash','vallarapu');
Query OK, 1 row affected (0.06 sec)
-------[IN POSTGRESQL]
new_ph=# \dt
          List of relations
  Schema   | Name | Type  | Owner
-----------+------+-------+--------
 shc_sitef | emp  | table | new_ph
(1 row)
new_ph=#
new_ph=# select * from emp;
id | first_name | last_name
----+------------+-----------
(0 rows)
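Since the row never arrives on the PostgreSQL side, the MySQL binary-log settings that pg_chameleon depends on may also be relevant. As far as I understand from the documentation, log_bin must be enabled with binlog_format=ROW and binlog_row_image=FULL; they can be checked like this (sketch, output not shown):
mysql> SHOW VARIABLES LIKE 'log_bin';
mysql> SHOW VARIABLES LIKE 'binlog_format';
mysql> SHOW VARIABLES LIKE 'binlog_row_image';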
(ph_new) ist@ist:~/.pg_chameleon/configuration$ chameleon --version;
pg-chameleon 2.0.14
MySQL version
mysql> select version();
+-----------+
| version() |
+-----------+
| 8.0.21    |
+-----------+
1 row in set (0.00 sec)
PostgreSQL version
new_ph=# select version();
                                                           version
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.8 (Ubuntu 11.8-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
(1 row)
My default.yml:
pid_dir: '~/.pg_chameleon/pid/'
log_dir: '~/.pg_chameleon/logs/'
log_dest: file
log_level: info
log_days_keep: 10
rollbar_key: ''
rollbar_env: ''

# type_override allows the user to override the default type conversion
# into a different one.
type_override:
  "tinyint(1)":
    override_to: boolean
    override_tables:
      - "*"

# postgres destination connection
pg_conn:
  host: "localhost"
  port: "5432"
  user: "new_ph"
  password: "new_ph@123"
  database: "new_ph"
  charset: "utf8"

sources:
  mysql:
    db_conn:
      host: "localhost"
      port: "1122"
      user: "pg_replica"
      password: "pg_replica@123"
      charset: 'utf8'
      connect_timeout: 10
    schema_mappings:
      ph_p: shc_sitef
    limit_tables:
      #- delphis_mediterranea.foo
    skip_tables:
      #- delphis_mediterranea.bar
    grant_select_to:
      - usr_readonly
    lock_timeout: "120s"
    my_server_id: 100
    replica_batch_size: 10000
    replay_max_rows: 10000
    batch_retention: '1 day'
    copy_max_memory: "300M"
    copy_mode: 'file'
    out_dir: /tmp
    sleep_loop: 1
    on_error_replay: continue
    on_error_read: continue
    auto_maintenance: "disabled"
    gtid_enable: false
    type: mysql
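For completeness, these are roughly the MySQL-side privileges the pg_chameleon documentation expects for the replica user (user and schema names taken from my config above; a sketch, not an exact transcript of what I ran):
mysql> CREATE USER 'pg_replica'@'%' IDENTIFIED BY 'pg_replica@123';
mysql> GRANT ALL ON ph_p.* TO 'pg_replica'@'%';
mysql> GRANT RELOAD, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'pg_replica'@'%';
mysql> FLUSH PRIVILEGES;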
Does anyone have a solution for this? Please share it with me. Thank you in advance.