The customer requires that attachments of 100 MB and larger be stored in the database (storing the files on disk and keeping only the path in the database was ruled out). For performance reasons we decided to use the SQL Server 2008 FILESTREAM feature, and testing showed that storing large files became roughly twice as fast with FILESTREAM enabled. The problem now is that, whether or not FILESTREAM is enabled, SQL Server's memory usage climbs rapidly while large files are being inserted; until memory runs out, it seems to grow by about the size of each file inserted. After setting a maximum server memory, the growth stops at that limit, but the database then responds very slowly. Has anyone run into the same problem? How can it be solved?
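The symptom described above is typical when each file is materialized as one large byte[] (or varbinary parameter) before the insert, so both the client and the server end up buffering the whole 100 MB+ payload. The client-side remedy is to move the data in fixed-size chunks. A minimal sketch of that chunked-copy loop (the class name and the 8 KB buffer size are illustrative choices, not anything mandated by SQL Server):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy {
    // Copies the source to the destination in fixed-size chunks, so at most
    // one small buffer is ever held in memory regardless of file size.
    // Returns the total number of bytes copied.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buf)) != -1) {
            out.write(buf, 0, read); // write only the bytes actually read
            total += read;
        }
        return total;
    }
}
```

With JDBC, `PreparedStatement.setBinaryStream(index, stream, length)` achieves the same effect internally; with .NET, write the chunks directly to the `SqlFileStream` instead of building one large byte array first.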
using (SqlConnection connection = new SqlConnection(
    connStringBuilder.ToString()))
{
    connection.Open();

    SqlCommand command = new SqlCommand("", connection);
    command.CommandText = "select Top(1) Photo.PathName(), "
        + "GET_FILESTREAM_TRANSACTION_CONTEXT() from employees";

    SqlTransaction tran = connection.BeginTransaction(
        System.Data.IsolationLevel.ReadCommitted);
    command.Transaction = tran;

    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Get the logical path and the transaction context for the file.
            string path = reader.GetString(0);
            byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

            using (SqlFileStream fileStream = new SqlFileStream(path,
                transactionContext,
                FileAccess.ReadWrite,
                FileOptions.SequentialScan, 0))
            {
                // Seek to the end of the file and append a single byte.
                fileStream.Seek(0, SeekOrigin.End);
                fileStream.WriteByte(0x01);
            }
        }
    }
    tran.Commit();
}
Problem found. Thanks, everyone!
CREATE DATABASE Archive  -- sample database name from the original FILESTREAM example
ON
PRIMARY ( NAME = Arch1,
    FILENAME = 'c:\data\archdat1.mdf'),
FILEGROUP FileStreamGroup1 CONTAINS FILESTREAM( NAME = Arch3,
    FILENAME = 'c:\data\filestream1')
LOG ON ( NAME = Archlog1,
    FILENAME = 'c:\data\archlog1.ldf')
GO

CREATE TABLE foo
(
id int NOT NULL PRIMARY KEY,
Photo varbinary(max) FILESTREAM NULL,
MyRowGuidColumn uniqueidentifier NOT NULL ROWGUIDCOL
UNIQUE DEFAULT NEWID()
)
GO

Let's assume that the table has already been populated with some data. Here is the code snippet that I used to retrieve the column values:

strCmd = "select * from foo";
rs = stmt.executeQuery(strCmd);
if (rs.next())
{
ResultSetMetaData rsmd = rs.getMetaData();
if (null != rsmd)
{
String x = rsmd.getColumnTypeName(2);
String name = rsmd.getColumnName(2);
System.out.println("Column " + name + " is data type: " + x);
int i = rsmd.getColumnType(2);
switch (i)
{
case (java.sql.Types.LONGVARBINARY):
System.out.println("Column is of JDBC type LONGVARBINARY.");
InputStream bStream = rs.getBinaryStream(2);
byte[] y = new byte[1024];
if (null != bStream)
{
// write the retrieved filestream data as a file.
System.out.println("Write the filestream data out as a file.");
FileOutputStream fs = new FileOutputStream("c:\\myFile");
int read = bStream.read(y);
while (read != -1)
{
fs.write(y, 0, read); // write only the bytes actually read
fs.flush();
read = bStream.read(y);
}
fs.close();
}
break;
default:
throw new Exception("Unexpected data type found: " + String.valueOf(i) + ". Expected LONGVARBINARY.");
}
}

// Now, update the data with new file content.
System.out.print("Now, let's update the filestream data.");
FileInputStream iStream = new FileInputStream("c:\\testFile.xml");
rs.updateBinaryStream(2, iStream, -1);
rs.updateRow();
iStream.close();
}
rs.close();

When executed as a console application, you'll get the following output:

Column Photo is data type: varbinary
Column is of JDBC type LONGVARBINARY.
Write the filestream data out as a file.
Now, let's update the filestream data.

As you've probably already noticed, SQL Server 2008 Filestream is simply a LONGVARBINARY to the v1.2 JDBC driver. This means you can operate on a Filestream like any other LONGVARBINARY, though I recommend that you stick with binaryStream to minimize the amount of buffering the driver has to do.
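For completeness, here is a minimal JDBC sketch of how the foo table above could have been populated in a memory-friendly way. The class and method names are illustrative, and the caller is assumed to supply an open Connection; the key point is handing the driver a stream plus its length so it can send the data in chunks instead of buffering the entire file first:

```java
import java.io.File;
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class PopulateFoo {
    // MyRowGuidColumn is omitted so its DEFAULT NEWID() fills it in.
    static final String INSERT_SQL = "INSERT INTO foo (id, Photo) VALUES (?, ?)";

    // Streams the file's bytes into the FILESTREAM-backed Photo column.
    public static void insertPhoto(Connection conn, int id, File file) throws Exception {
        try (FileInputStream in = new FileInputStream(file);
             PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setInt(1, id);
            // Stream + length lets the driver chunk the transfer rather
            // than materializing the whole file as one byte array.
            ps.setBinaryStream(2, in, (int) file.length());
            ps.executeUpdate();
        }
    }
}
```

Since the driver sees the column as LONGVARBINARY, this is the same call you would use for any large varbinary(max) column; FILESTREAM storage is transparent on the Java side.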
http://blogs.msdn.com/b/jdbcteam/archive/2008/10/03/working-with-sql-server-2008-filestream-using-v1-2-jdbc-driver.aspx