I have a SQL Server database that I am migrating to AWS S3 in Parquet format in order to build a data lake. I am using Apache Airflow to automate this with DAGs. In this setup, every table of a schema becomes a .parquet file in S3, which turns S3 into a data lake and makes it possible to query the data afterwards with AWS Athena and/or index it further in ElasticSearch.
Some of the tables are really big, so I obviously expect the migration tasks for those tables to take longer. For Python, the only library I found to connect to Microsoft SQL Server is pyodbc, which is officially developed and maintained by Microsoft.
For such big tables (around 60 million records), using cursor.fetchall() takes far too long and ends in an error, because the task seems to get killed by Airflow with a SIGKILL.
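For context, the pre-batching version of the fetching step did essentially the following (a simplified reconstruction, not my exact code), which is what blew up on the big tables:

cursor.execute(query)
rows = cursor.fetchall()   # pulls all ~60 million rows into memory in one go
cols = [col[0] for col in cursor.description]
data_frame = pd.DataFrame.from_records(rows, columns=cols)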
Below is an example of how big the DAG is (just a part of it):
To fetch all the tables of a given schema, I use the following SQL Server query:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_CATALOG='{}';
Inside the braces I use Python's .format() function to insert the schema name, and the table names that come back are what I use to build the DAG structure dynamically (roughly as in the sketch right below). I have also changed the Python code to fetch the data from these huge tables in batches, to minimize any potential memory overflow.
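A simplified sketch of that dynamic generation: the DAG id and the task naming follow what shows up in the scheduler log further down, but the schedule, the per-table query and the S3 path are placeholders written here just for illustration (the PythonOperator import path assumes Airflow 2.x, and fetch_data from below is assumed to be defined in the same DAG file):

import pyodbc
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

TABLES_QUERY = """SELECT TABLE_NAME
                  FROM INFORMATION_SCHEMA.TABLES
                  WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_CATALOG='{}';"""

def get_table_names(schema):
    # Run the INFORMATION_SCHEMA query above and return the schema's table names
    cnxn = pyodbc.connect(driver='Here I Put the ODBC Driver Name',
                          host='Host for the SQL Server DB',
                          database='Name of the DB Schema',
                          user='User for Auth in the DB',
                          password='Pass for Auth in the DB')
    cursor = cnxn.cursor()
    cursor.execute(TABLES_QUERY.format(schema))
    tables = [row[0] for row in cursor.fetchall()]
    cnxn.close()
    return tables

with DAG('ORTOCLIN_TO_S3', start_date=datetime(2021, 4, 1), schedule_interval=None) as dag:
    for table in get_table_names('ORTOCLIN'):
        # One fetching task per table of the schema (query and remote_path are placeholders)
        PythonOperator(
            task_id='FETCHING_{}_DATA'.format(table.upper()),
            python_callable=fetch_data,
            op_kwargs={'query': 'SELECT * FROM {}'.format(table),
                       'schema': 'ORTOCLIN',
                       'filename': table,
                       'remote_path': 'my-bucket/ORTOCLIN/{}.parquet'.format(table)})

The batched fetching itself is handled by the two functions below: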
import time

import pandas as pd
import pyodbc

def stream(cursor, batch_size=50000):
    # Yield the result set in chunks instead of loading everything at once
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield rows
def fetch_data(query, schema, filename, remote_path, save_locally=False):
    cnxn = pyodbc.connect(driver='Here I Put the ODBC Driver Name',
                          host='Host for the SQL Server DB',
                          database='Name of the DB Schema',
                          user='User for Auth in the DB',
                          password='Pass for Auth in the DB')
    print('Connection established with {} ..'.format(schema))

    cursor = cnxn.cursor()
    print('Initializing cursor ...')
    print('Requesting query {} ..'.format(query))

    cursor.execute(query)
    print('Query fetched for {} ..'.format(schema))

    row_batch = stream(cursor)
    print('Getting Iterator ...')

    cols = cursor.description
    cols = [col[0] for col in cols]

    print('Creating batch data_frame ..')
    data_frame = pd.DataFrame(columns=cols)

    start_time = time.time()
    for rows in row_batch:
        # Append each batch to the accumulated DataFrame and release the batch
        batch_df = pd.DataFrame.from_records(rows, columns=cols)
        data_frame = data_frame.append(batch_df, ignore_index=True)
        batch_df = None
        print("-- Batch inserted in %s seconds --" % (time.time() - start_time))
        start_time = time.time()

    cnxn.close()
    print('Connection closed ..')

    # other code to convert to .parquet and send to S3
    save_to_bucket(data_frame, remote_path)
    return 'FETCHING DATA'
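The save_to_bucket helper is not shown here; it roughly does something like this (a minimal sketch assuming pyarrow and s3fs, not my exact implementation):

import pyarrow as pa
import pyarrow.parquet as pq
import s3fs

def save_to_bucket(data_frame, remote_path):
    # Convert the accumulated DataFrame to an Arrow table and write it as a
    # single .parquet object straight to S3 (remote_path like 'my-bucket/schema/table.parquet')
    fs = s3fs.S3FileSystem()
    table = pa.Table.from_pandas(data_frame, preserve_index=False)
    with fs.open(remote_path, 'wb') as f:
        pq.write_table(table, f)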
This strategy seems to work fine for about 96% of the tables in the schema, but, as I said before, when a table is really big, around 60 million records, the task runs for a while, roughly 30 minutes, and then Airflow kills it, just like that. There is no connection error, no Python exception, nothing at all. The only thing shown in the scheduler's terminal is:
[2021-04-17 23:03:59,719] {scheduler_job.py:1199} INFO - Executor reports execution of ORTOCLIN_TO_S3.FETCHING_HISTORICORESUMO_DATA execution_date=2021-04-17 20:00:17.426578+00:00 exited with status success for try_number 1
[2021-04-17 23:05:02,050] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:10:02,314] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:15:02,666] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:20:03,226] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:25:03,868] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:30:04,346] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:35:04,853] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
[2021-04-17 23:40:05,324] {scheduler_job.py:1834} INFO - Resetting orphaned tasks for active dag runs
And in the Airflow log for the task, we simply get:
Is there any workaround for this? Please help me!