Q: As the title says, I followed the article below, but I could not get it to work; I wonder whether anyone has actually implemented this. My first question is: how can I compress data that is not stored in a file? My second question: I read Todd Sundsted's "Compress your data to improve the performance of your network applications" with great enthusiasm, but came away somewhat disappointed. When I saw the title I was delighted; I thought I had finally found the solution to my problem. At our company we are trying to improve the performance of an RMI application that organizes data. The server side does the vast majority of the processing and has already been optimized. We have spent a year and a half improving performance, and the bottleneck now appears to be data transfer: at any time of day we may be sending many thousands of records between client and server. As a possible solution, I suggested compressing the data before returning it to the client, which is exactly what Todd's article describes. The article's example, however, compresses files, not what we need: compressing in-memory data. In our RMI implementation we fetch data from the database, put it into a list, return the list to the client, and finally insert it into a JTable. I would like to compress the list's data before returning it to the client, decompress it on the client, and then insert the data into the table. Is this idea feasible?

A: I have recently received several questions about Todd's article. Many readers seem puzzled by its example, because the example centers on file compression.

To answer the first question: using ZipInputStream and ZipOutputStream does not force you to use files. The only thing to note is that you must convert your data into byte-array form.

The second question is trickier. Communicating over RMI requires some adjustments. To have RMI compress data before it is transmitted, you must create a new socket type that compresses its data, and then tell RMI to use that socket.

The broad steps for creating a custom RMI socket are:

1. Choose or create a new socket class (see Sun's "Creating a Custom Socket Type").
2. Create a corresponding server-side socket class.
3. Create an RMIClientSocketFactory.
4. Create an RMIServerSocketFactory.
5. Create a remote object that extends UnicastRemoteObject so that it uses the new factories.

With this outline in place, let us look at how each step is implemented.

Step 1: create ZipSocket. Since we want Zip compression, we define the socket as follows:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;
import java.net.Socket;

public class ZipSocket extends Socket {
    private InputStream in;
    private OutputStream out;

    public ZipSocket() {
        super();
    }

    public ZipSocket(String host, int port) throws IOException {
        super(host, port);
    }

    public InputStream getInputStream() throws IOException {
        if (in == null) {
            in = new ZipInputStream(super.getInputStream());
        }
        return in;
    }

    public OutputStream getOutputStream() throws IOException {
        if (out == null) {
            out = new ZipOutputStream(super.getOutputStream());
        }
        return out;
    }

    public synchronized void close() throws IOException {
        OutputStream o = getOutputStream();
        o.flush();
        super.close();
    }
}

Step 2: create ZipServerSocket

import java.net.ServerSocket;
import java.net.Socket;
import java.io.IOException;

public class ZipServerSocket extends ServerSocket {
    public ZipServerSocket(int port) throws IOException {
        super(port);
    }

    public Socket accept() throws IOException {
        Socket socket = new ZipSocket();
        implAccept(socket);
        return socket;
    }
}

Step 3: create ZipClientSocketFactory. The client-side factory must take the following form:

import java.io.IOException;
import java.io.Serializable;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;

public class ZipClientSocketFactory implements RMIClientSocketFactory, Serializable {
    public Socket createSocket(String host, int port) throws IOException {
        return new ZipSocket(host, port);
    }
}

Step 4: create ZipServerSocketFactory

import java.io.IOException;
import java.io.Serializable;
import java.net.ServerSocket;
import java.rmi.server.RMIServerSocketFactory;

public class ZipServerSocketFactory implements RMIServerSocketFactory, Serializable {
    public ServerSocket createServerSocket(int port) throws IOException {
        return new ZipServerSocket(port);
    }
}

Step 5: create a remote object that extends UnicastRemoteObject so that it uses the new factories:

import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class YourRemoteObject extends UnicastRemoteObject {
    public YourRemoteObject(int port) throws RemoteException {
        super(port, new ZipClientSocketFactory(), new ZipServerSocketFactory());
    }
    // the rest of your implementation
}
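On the first question, compressing data that never touches a file: the same stream classes run happily over byte arrays. Below is a minimal sketch of my own (the class and method names are hypothetical, not from Todd's article), using GZIP streams to compress a serialized list entirely in memory, which is essentially what the questioner wants to do with the list before returning it over RMI:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class InMemoryCompression {

    // Serialize an object and compress the bytes, entirely in memory.
    public static byte[] compress(Serializable data) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(new GZIPOutputStream(bytes))) {
            out.writeObject(data);
        } // closing the ObjectOutputStream finishes the GZIP stream
        return bytes.toByteArray();
    }

    // Decompress and deserialize on the receiving side.
    public static Object decompress(byte[] compressed) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                new GZIPInputStream(new ByteArrayInputStream(compressed)))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> rows = new ArrayList<>(Collections.nCopies(1000, "some repetitive row data"));
        byte[] packed = InMemoryCompression.compress((Serializable) rows);
        @SuppressWarnings("unchecked")
        List<String> unpacked = (List<String>) InMemoryCompression.decompress(packed);
        System.out.println("round trip intact: " + rows.equals(unpacked));
    }
}
```

Note that no file is involved anywhere: the compressed bytes live in a byte array, which can be returned from a remote method and decompressed on the client.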
By chance I found that someone had rewritten this compressing stream, and it finally works:
package common.zip;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* This code is based on the ideas presented in
* <a href="http://javatechniques.com/blog/compressing-data-sent-over-a-socket/">Compressing Data Sent Over a Socket</a> by
* Philip Isenhour. I am very grateful for the approach that he presented. The key idea is to avoid using the GZip and the Zip streams
* and to use the Deflater and Inflater classes directly. In addition, Philip Isenhour essentially defines a packet type that has a
* header indicating the size and compressed size of the packet contents. I took these two key ideas and wrote the following code
* from scratch, without reference to Philip Isenhour's documents. I think that some version of Philip Isenhour's ideas should find their way
* into the core java libraries because otherwise people will continue struggling with this problem.
* <p>
* I have tried several other approaches to a compressing input and compressing output stream. The first approach was to base the
* input and output streams on the GZip input and output stream. There are web pages on the internet that suggest that calling the GZipOutputStream's
* finish() method during the flush() will work. I had trouble with this approach when a write occurs on the stream after
* the flush() (which calls finish()). I would get exceptions indicating that the GZip Output stream was finished and therefore unwriteable.
* <p>
* I then tried to use the ZIPInput/OutputStreams. I would flush data by creating a ZipEntry and writing it out. This approach
* actually worked very well. But it had a mysterious bug where some data was either not fully written out or not read. In the rmi
* context things would hang. This bug was relatively rare and only happened on certain machines. I never found out what the problem was.
* <p>
* The beauty of Philip Isenhour's approach is that the developer can completely control how data is flushed and fully written out. The developer
* can also ensure that on the read method all the data is fully read. So there should not be any more rmi hangs. The only issue is whether the
* deflate/inflate logic is correct. This is pretty thoroughly tested in our server-client testing (though there are *always* bugs hidden somewhere).
*
* @author tredmond
*
*/
public class PacketHeader {
public static final byte [] ALIGNMENT = { 0x4c, 0x3a, 0x74, 0x58 };
private static final int BYTES_IN_INT = 4;
private static final int BITS_IN_BYTE = 8;
private static final int BYTE_MASK = 0x0ff;
private int size;
private int compressedSize;
public PacketHeader(int size, int compressedSize) {
this.size = size;
this.compressedSize = compressedSize;
}
public static PacketHeader read(InputStream is) throws IOException {
for (int i=0;i<ALIGNMENT.length;i++) {
byte b=ALIGNMENT[i];
int alignCheck = is.read();
if (alignCheck == -1) {
throw new EOFException("No packet found");
}
if ((byte) alignCheck != b) {
throw new IOException("Packet header out of alignment between reader and writer (Thread = " + Thread.currentThread().getName() + ")");
}
}
int size = readInt(is);
int compressedSize = readInt(is);
return new PacketHeader(size, compressedSize);
}
public void write(OutputStream os) throws IOException {
for (int i=0;i<ALIGNMENT.length;i++) {
byte b=ALIGNMENT[i];
os.write(b);
}
writeInt(os, size);
writeInt(os, compressedSize);
}
public int getSize() {
return size;
}
public int getCompressedSize() {
return compressedSize;
}
private static int readInt(InputStream is) throws IOException {
int result = 0;
int[] buffer = new int[BYTES_IN_INT];
for (int i = 0; i < BYTES_IN_INT; i++) {
int c = is.read();
if (c == -1) {
throw new EOFException("Could not read compressed packet header");
}
buffer[i] = c;
}
for (int i = BYTES_IN_INT - 1; i >= 0; i--) {
result = result << BITS_IN_BYTE;
int b = buffer[i];
result += b < 0 ? 256 + b : b;
}
return result;
}
private static void writeInt(OutputStream os, int v) throws IOException {
for (int i = 0; i < BYTES_IN_INT - 1; i++) {
os.write(v & BYTE_MASK);
v = v >> BITS_IN_BYTE;
}
os.write(v);
}}
package common.zip;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;
/**
 * See the class comment on {@link PacketHeader} for the background of this approach
 * (based on Philip Isenhour's ideas) and the alternatives that were tried first.
 *
 * @author tredmond
 */
public class CompressingInputStream extends InputStream {
protected InputStream is;
protected byte buffer[];
protected int offset;
private Inflater inflater;
private static int counter = 0;
private int id;
public CompressingInputStream(InputStream is) {
this.is = is;
inflater = new Inflater();
synchronized (CompressingInputStream.class) {
id = counter++;
}
}
public int read() throws IOException {
if (buffer == null) {
fillBuffer();
}
if (buffer == null) {
return -1;
}
int ret = buffer[offset++] & 0xff; // mask so high-bit bytes honor the read() contract (0-255, -1 only at EOF)
if (buffer.length == offset) {
buffer = null;
}
return ret;
}
public int read(byte[] b, int off, int len) throws IOException {
if (buffer == null) {
fillBuffer();
}
if (buffer == null) {
return -1;
}
int bytesRead = 0;
for (bytesRead = 0; offset < buffer.length && bytesRead < len; bytesRead++) {
b[off++] = buffer[offset++];
}
if (buffer.length == offset) {
buffer = null;
}
return bytesRead;
}
private void fillBuffer() throws IOException {
buffer = null;
offset = 0;
PacketHeader header = PacketHeader.read(is);
buffer = new byte[header.getSize()];
fillBuffer(header);
}
protected void fillBuffer(PacketHeader header) throws IOException {
inflater.reset();
int compressedSize = header.getCompressedSize();
byte compressedBuffer[] = new byte[compressedSize];
readFully(compressedBuffer, compressedSize);
inflater.setInput(compressedBuffer);
try {
int inflatedSize = inflater.inflate(buffer);
if (inflatedSize != header.getSize()) {
throw new IOException("Inflated to the wrong size, expected "
+ header.getSize()
+ " bytes but got "
+ inflatedSize
+ " bytes");
}
}
catch (DataFormatException dfe) {
IOException ioe = new IOException("Compressed Data format bad: " + dfe.getMessage());
ioe.initCause(dfe);
throw ioe;
}
if (!inflater.needsInput()) {
throw new IOException("Inflater thinks that there is more data to decompress");
}
logPacket(compressedBuffer);
}
protected void readFully(byte[] b, int len) throws IOException {
int bytesRead = 0;
while (bytesRead < len) {
int readThisTime = is.read(b, bytesRead, len - bytesRead);
if (readThisTime == -1) {
throw new EOFException("Unable to read entire compressed packet contents");
}
bytesRead += readThisTime;
}
}
// Debug helper: builds (but does not emit) a printable dump of the packet contents.
protected void logPacket(byte [] compressedBuffer) {
    try {
StringBuffer sb = new StringBuffer();
sb.append("Uncompressed buffer of size ");
sb.append(buffer.length);
sb.append(": ");
for (int i = 0; i < buffer.length; i++) {
sb.append(buffer[i]);
sb.append(" ");
}
sb = new StringBuffer();
sb.append("Compressed buffer of size ");
sb.append(compressedBuffer.length);
sb.append(": ");
for (int i = 0; i < compressedBuffer.length; i++) {
sb.append(compressedBuffer[i]);
sb.append(" ");
        }
    }
    catch (Throwable t) {
        // ignore logging failures
    }
}
}
package common.zip;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.Deflater;
/**
 * See the class comment on {@link PacketHeader} for the background of this approach
 * (based on Philip Isenhour's ideas) and the alternatives that were tried first.
 *
 * @author tredmond
 */
public class CompressingOutputStream extends OutputStream {
public static final int COMPRESSION_PAD = 1024;
public static final int BUFFER_SIZE = 128 * 1024;
public static final int KB = 1024;
protected OutputStream os;
private Deflater deflater;
private static int counter = 0;
private int id;
protected byte buffer[] = new byte[BUFFER_SIZE];
protected int offset = 0;
private static int totalBytesWritten = 0;
private static int totalCompressedBytesWritten = 0;
public CompressingOutputStream(OutputStream os) {
this.os = os;
deflater = new Deflater();
synchronized (CompressingOutputStream.class) {
id = counter++;
}
}
public void write(int b) throws IOException {
ensureBufferNotFull();
buffer[offset++] = (byte) b;
ensureBufferNotFull();
}
public void flush() throws IOException {
    try {
        if (offset > 0) {
            deflater.reset();
            deflater.setInput(buffer, 0, offset);
            deflater.finish();
            byte [] compressedBuffer = new byte[offset + COMPRESSION_PAD];
            deflater.deflate(compressedBuffer);
            if (!deflater.finished()) {
                // finished() confirms that the entire packet fit into compressedBuffer
                throw new IOException("Insufficient pad for compression");
            }
            int compressedSize = deflater.getTotalOut();
            PacketHeader header = new PacketHeader(deflater.getTotalIn(), compressedSize);
            logPacket(compressedBuffer, compressedSize);
            header.write(os);
            os.write(compressedBuffer, 0, compressedSize);
        }
    }
    finally {
        offset = 0;
    }
    os.flush();
}
private void ensureBufferNotFull() throws IOException {
if (offset >= BUFFER_SIZE) {
flush();
}
}
protected void logPacket(byte [] compressedBuffer, int compressedSize) {
logCompressionRatios(compressedSize);
try {
    StringBuffer sb = new StringBuffer();
sb.append("Uncompressed buffer of size ");
sb.append(offset);
if (compressedSize > offset) {
sb.append(" (compression increased size)");
}
sb.append(": ");
for (int i = 0; i < offset; i++) {
sb.append(buffer[i]);
sb.append(" ");
}
sb = new StringBuffer();
sb.append("Compressed buffer of size ");
sb.append(compressedSize);
sb.append(": ");
for (int i = 0; i < compressedSize; i++) {
sb.append(compressedBuffer[i]);
sb.append(" ");
        }
    }
    catch (Throwable t) {
        // ignore logging failures
    }
}
private void logCompressionRatios(int compressedSize) {
    synchronized (CompressingOutputStream.class) {
int previousMB = totalBytesWritten / (KB * KB);
totalBytesWritten += offset;
totalCompressedBytesWritten += compressedSize;
if (previousMB < (totalBytesWritten / (KB * KB))) {
    // fires once per megabyte written; a compression-ratio log line would go here
}
}
}
}
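To see the whole scheme in miniature: the streams above send one packet per flush, each packet carrying its uncompressed and compressed sizes so the reader knows exactly how many bytes to pull off the wire and inflate, which is what prevents the RMI hangs. The sketch below is my own simplification (it uses DataOutputStream's big-endian ints rather than PacketHeader's little-endian format and alignment bytes), but the deflate/inflate flow mirrors the classes above:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class PacketRoundTrip {

    // One packet per flush: [size][compressedSize][compressed bytes].
    public static void writePacket(OutputStream os, byte[] data) throws IOException {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] compressed = new byte[data.length + 1024]; // pad for incompressible input
        int compressedSize = deflater.deflate(compressed);
        if (!deflater.finished()) {
            throw new IOException("output buffer too small");
        }
        DataOutputStream dos = new DataOutputStream(os);
        dos.writeInt(data.length);
        dos.writeInt(compressedSize);
        dos.write(compressed, 0, compressedSize);
        dos.flush();
    }

    // Read exactly one packet back; the header says how much to read and inflate.
    public static byte[] readPacket(InputStream is) throws IOException, DataFormatException {
        DataInputStream dis = new DataInputStream(is);
        int size = dis.readInt();
        int compressedSize = dis.readInt();
        byte[] compressed = new byte[compressedSize];
        dis.readFully(compressed); // never hand a partial buffer to the Inflater
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] data = new byte[size];
        if (inflater.inflate(data) != size) {
            throw new IOException("inflated to the wrong size");
        }
        return data;
    }

    public static void main(String[] args) throws Exception {
        byte[] message = "hello hello hello hello".getBytes("UTF-8");
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        writePacket(wire, message);
        byte[] back = readPacket(new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(new String(back, "UTF-8"));
    }
}
```

Because every packet is fully self-describing, the reader can block until it has read exactly compressedSize bytes and no more, so flush boundaries on the writer line up with read boundaries on the reader.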