Hi everyone. I ran sqlldr, and after inserting part of the data it failed with an error.
From [email protected] Wed Apr 25 04:03:37 2012
Return-Path: <[email protected]>
Received: from dbbak.localdomain (localhost.localdomain [127.0.0.1])
by dbbak.localdomain (8.13.8/8.13.8) with ESMTP id q3P82GlZ005471
for <[email protected]>; Wed, 25 Apr 2012 04:03:37 -0400
Received: (from root@localhost)
by dbbak.localdomain (8.13.8/8.13.8/Submit) id q3P82D06005328;
Wed, 25 Apr 2012 04:02:13 -0400
Date: Wed, 25 Apr 2012 04:02:13 -0400
Message-Id: <[email protected]>
To: [email protected]
From: [email protected]
Subject: Logwatch for dbbak (Linux)
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="iso-8859-1"
################### Logwatch 7.3 (03/24/06) ####################
Processing Initiated: Wed Apr 25 04:02:12 2012
Date Range Processed: yesterday
( 2012-Apr-24 )
Period is day.
Detail Level of Output: 0
Type of Output: unformatted
Logfiles for Host: dbbak
##################################################################
--------------------- pam_unix Begin ------------------------
su:
Sessions Opened:
root(uid=0) -> oracle: 1 Time(s)
su-l:
Unknown Entries:
session closed for user oracle: 3 Time(s)
session opened for user oracle by root(uid=0): 1 Time(s)
---------------------- pam_unix End -------------------------
--------------------- samba Begin ------------------------
**Unmatched Entries**
lib/util_sock.c:get_peer_addr(1232) getpeername failed. Error was ???????? : 25 Time(s)
lib/util_sock.c:read_data(540) read_data: read failure for 4 bytes to client 192.168.1.222. Error = ???????? : 1 Time(s)
---------------------- samba End -------------------------
--------------------- Disk Space Begin ------------------------
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3 20G 7.0G 12G 38% /
/dev/mapper/dbfiles-dbsys
689G 291G 364G 45% /opt
/dev/sda2 20G 12G 8.5G 58% /mnt/fat
---------------------- Disk Space End -------------------------
###################### Logwatch End #########################
I don't know where the problem is. Also, what I keep seeing recommended online is parallel loading:
sqlldr userid=/ control=result1.ctl direct=true parallel=true
sqlldr userid=/ control=result2.ctl direct=true parallel=true
sqlldr userid=/ control=result2.ctl direct=true parallel=true
When loading a large amount of data (roughly more than 10GB), it is best to suppress logging:
SQL> ALTER TABLE RESULTXT nologging;
This avoids generating redo log and improves throughput. Then, in the control file, add a line above load data: unrecoverable. This option must be used together with DIRECT.
With parallel loads, Oracle claims it can process up to 100GB of data per hour! In practice, reaching 1-10GB is probably a good result. Start with files of the same structure but only a small amount of data, and load the full volume once that succeeds, so as not to waste time.
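The unrecoverable tip above can be sketched as a control file; this is a minimal illustration, and the data file name, field terminator, and column names are placeholders, not taken from the post:

```
UNRECOVERABLE
LOAD DATA
INFILE 'result1.dat'
APPEND INTO TABLE RESULTXT
FIELDS TERMINATED BY ','
(col1, col2, col3)
```

With UNRECOVERABLE, a direct-path load writes minimal redo, so the loaded rows cannot be recovered from the redo log; take a backup after the load if the data matters.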
My questions about SQLLDR: what exactly does "parallel" mean here? Is it multiple loads against the same file, or loading multiple files in parallel? Which parameters actually matter?
My load ran for 12 hours and only inserted a few million rows, which is slow. Where and which parameters should I set to make the inserts faster? The table has indexes.
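Since the table has indexes, index maintenance is often what dominates a long direct-path load. A hedged sketch of an invocation with parameters that commonly affect speed (the userid, control file, and values here are illustrative assumptions, not settings from the post):

```shell
# Direct-path load: bypasses the SQL engine, usually the fastest option.
# skip_index_maintenance=true defers index updates; the indexes are left
# UNUSABLE and must be rebuilt afterwards (ALTER INDEX ... REBUILD).
sqlldr userid=scott/tiger control=result1.ctl log=result1.log \
       direct=true parallel=true \
       skip_index_maintenance=true
```

For conventional-path loads, rows, bindsize, and readsize are the parameters usually tuned instead.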
From [email protected] Wed Apr 25 04:03:37 2012
Return-Path: <[email protected]>
Received: from dbbak.localdomain (localhost.localdomain [127.0.0.1])
by dbbak.localdomain (8.13.8/8.13.8) with ESMTP id q3P82GlZ005471
for <[email protected]>; Wed, 25 Apr 2012 04:03:37 -0400
Received: (from root@localhost)
by dbbak.localdomain (8.13.8/8.13.8/Submit) id q3P82D06005328;
Wed, 25 Apr 2012 04:02:13 -0400
Date: Wed, 25 Apr 2012 04:02:13 -0400
Message-Id: <[email protected]>
To: [email protected]
From: [email protected]
Subject: Logwatch for dbbak (Linux)
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="iso-8859-1"
################### Logwatch 7.3 (03/24/06) ####################
Processing Initiated: Wed Apr 25 04:02:12 2012
Date Range Processed: yesterday
( 2012-Apr-24 )
Period is day.
Detail Level of Output: 0
Type of Output: unformatted
Logfiles for Host: dbbak
##################################################################
--------------------- pam_unix Begin ------------------------ su:
Sessions Opened:
root(uid=0) -> oracle: 1 Time(s)
su-l:
Unknown Entries:
session closed for user oracle: 3 Time(s)
session opened for user oracle by root(uid=0): 1 Time(s)
---------------------- pam_unix End -------------------------
--------------------- samba Begin ------------------------
**Unmatched Entries**
lib/util_sock.c:get_peer_addr(1232) getpeername failed. Error was ???????? : 25 Time(s)
lib/util_sock.c:read_data(540) read_data: read failure for 4 bytes to client 192.168.1.222. Error = ???????? : 1 Time(s)
---------------------- samba End -------------------------
--------------------- Disk Space Begin ------------------------ Filesystem Size Used Avail Use% Mounted on
/dev/sda3 20G 7.0G 12G 38% /
/dev/mapper/dbfiles-dbsys
689G 291G 364G 45% /opt
/dev/sda2 20G 12G 8.5G 58% /mnt/fat
---------------------- Disk Space End -------------------------
###################### Logwatch End #########################
不知道问题出在那 还有就是网上一直说 并发操作
sqlldr userid=/ control=result1.ctl direct=true parallel=true
sqlldr userid=/ control=result2.ctl direct=true parallel=true
sqlldr userid=/ control=result2.ctl direct=true parallel=true
当加载大量数据时(大约超过10GB),最好抑制日志的产生:
SQL>ALTER TABLE RESULTXT nologging;
这样不产生REDO LOG,可以提高效率。然后在CONTROL文件中load data上面加一行:unrecoverable 此选项必须要与DIRECT共同应用。
在并发操作时,ORACLE声称可以达到每小时处理100GB数据的能力!其实,估计能到1-10G就算不错了,开始可用结构 相同的文件,但只有少量数据,成功后开始加载大量数据,这样可以避免时间的浪费。
有关SQLLDR的问题 想知道 并发 到低是怎么一回事 是对同一个文件 操作 还是多个文件 并行操作 那些参数 有用
我运行了 12个小时 插入了 几百万数据 有点慢 需要在那里设置那些参数 ,能插入快点 表上有索引
Also, as soon as I run sqlldr I get this error:
Column not found before end of logical record (use TRAILING NULLCOLS)
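That error means a data record ended before every column in the control file's column list was matched. Adding TRAILING NULLCOLS, as the message suggests, tells SQL*Loader to treat the missing trailing columns as NULL. A minimal sketch, with placeholder table, file, and column names:

```
LOAD DATA
INFILE 'result1.dat'
APPEND INTO TABLE RESULTXT
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(col1, col2, col3)
```

TRAILING NULLCOLS goes in the INTO TABLE clause, before the column list.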
from (select t2.onenum,
t2.allnum,
t2.printstatus,
t.apfd_code,
t2.gidl_doc_numb,
t2.orgid,
t.bdgt_subj,
t3.subjectcode,
t3.subjectname,
case
when t.acst_id not in
(887209, 2001, 3001, 4001, 5001, 7001, 8001, 6066768) then
decode(t2.fund_char,
'0',
'预算内',
'1',
'其他财政性资金',
'2',
'其他',
'3',
'预算内(往年)',
'4',
'其他财政性资金(往年)',
'5',
'其他(往年)')
else
'其他'
end as fund_char,
t2.pay_amnt,
t2.dtal_id,
t2.item,
t4.name,
t.gath_org_name,
t2.smry smry,
t.acst_id
from ga_t_apfd_bill t,
sysmgr_org t1,
ga_t_apfd_bill_dtal t2,
busmgr_funnew t3,
sysmgr_user t4
where 1 = 1
and t.gath_org_id = t1.orgid(+)
and t.bdgt_subj = t3.subjectcode(+)
and t.apfd_user_id = t4.userid(+)
and t2.apfd_bill_id = t.apfd_bill_id
and t.year = 2012
and t3.year = 2012
and t.ACST_ID = 1001
and t.is_Del = 0
and t.status in ('4', '5')
and t2.gidl_doc_numb like % '414%'
order by t1.orgcode, t.bdgt_subj, to_number(t.APFD_CODE))
Where is the error in this??