$ arrow-flight
v57.2.0 · Apache Arrow Flight
Latest Update Summary
Crate
Name: arrow-flight
New version: 57.0.0
Release date: 2025-10-23T11:00:57Z
Crate readme
Short description: Native Rust implementation of Apache Arrow and Apache Parquet
Long description
This repository contains the Rust implementation of Apache Arrow, an in-memory columnar format, and Apache Parquet, a columnar file format. It includes core functionality for memory layout, arrays, and low-level computations in the arrow crate. The arrow-flight crate supports the Arrow-Flight IPC protocol, while the parquet crate handles the Parquet file format. A derive crate, parquet_derive, is available for generating RecordWriter/RecordReader for simple structs. The project follows Semantic Versioning and releases approximately monthly, with major versions released at most quarterly. It maintains a rolling MSRV policy, ensuring compatibility with stable Rust and updating the MSRV with a minimum of six months' notice.
Features • arrow • arrow-flight • parquet • parquet_derive • rolling MSRV
Code Examples
Add to Cargo.toml
arrow = "..."
parquet = "..."
arrow-flight = "..."
parquet_derive = "..."
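The "..." placeholders above are version requirements. The arrow-rs crates are released in lockstep with matching version numbers, so a minimal sketch pinning all four to the version shown on this page (57.2.0) would be:
arrow = "57.2.0"
parquet = "57.2.0"
arrow-flight = "57.2.0"
parquet_derive = "57.2.0"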
Core functionality (memory layout, arrays, low-level computations)
use std::sync::Arc;
use arrow::array::{ArrayRef, Float64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
fn main() {
    // Example schema
    let schema = Arc::new(Schema::new(vec![
        Field::new("city", DataType::Utf8, false),
        Field::new("temperature", DataType::Float64, false),
    ]));
    // Example arrays, type-erased to ArrayRef so they can form a column list
    let cities: ArrayRef = Arc::new(StringArray::from(vec!["New York", "London", "Tokyo"]));
    let temps: ArrayRef = Arc::new(Float64Array::from(vec![10.5, 8.2, 15.1]));
    // Create a RecordBatch pairing the schema with its columns
    let batch = RecordBatch::try_new(schema, vec![cities, temps]).unwrap();
    println!("{batch:?}");
}
Support for Parquet columnar file format
use std::sync::Arc;
use arrow::array::{ArrayRef, Int32Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let schema = Arc::new(Schema::new(vec![
        Field::new("col1", DataType::Int32, false),
        Field::new("col2", DataType::Utf8, false),
    ]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef,
            Arc::new(StringArray::from(vec!["a", "b", "c"])) as ArrayRef,
        ],
    )?;
    // Write the batch into an in-memory buffer in Parquet format
    let mut buf = Vec::new();
    let mut writer = ArrowWriter::try_new(&mut buf, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;
    println!("Parquet data written successfully ({} bytes).", buf.len());
    Ok(())
}
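To verify the roundtrip, here is a minimal sketch of reading the buffer back (read_back is a hypothetical helper, and the sketch assumes the bytes crate is available; Bytes implements parquet's ChunkReader, so the in-memory buffer can be read without touching disk):
use bytes::Bytes;
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
fn read_back(buf: Vec<u8>) -> Result<(), Box<dyn std::error::Error>> {
    // Build a reader over the in-memory Parquet bytes and iterate decoded batches
    let reader = ParquetRecordBatchReaderBuilder::try_new(Bytes::from(buf))?.build()?;
    for batch in reader {
        println!("Read batch with {} rows", batch?.num_rows());
    }
    Ok(())
}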
Support for Arrow-Flight IPC protocol
use arrow_flight::{FlightEndpoint, Ticket};
fn main() {
    // A Ticket identifies a data stream to retrieve from a Flight service
    let ticket = Ticket::new("example-query");
    // Endpoints advertise where a ticket can be redeemed
    let endpoint = FlightEndpoint::new().with_ticket(ticket);
    println!("Flight endpoint created for ticket: {:?}", endpoint.ticket);
}
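In a real service, endpoints are handed to clients inside a FlightInfo (the response to get_flight_info). A minimal sketch of that wiring, assuming the builder-style helpers on FlightInfo and FlightDescriptor; the descriptor path "example" is an arbitrary placeholder:
use arrow_flight::{FlightDescriptor, FlightEndpoint, FlightInfo, Ticket};
fn main() {
    let endpoint = FlightEndpoint::new().with_ticket(Ticket::new("example-query"));
    // A FlightInfo bundles a descriptor with the endpoints where data can be fetched
    let info = FlightInfo::new()
        .with_descriptor(FlightDescriptor::new_path(vec!["example".to_string()]))
        .with_endpoint(endpoint);
    println!("{info:?}");
}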
Derive RecordWriter/RecordReader for arbitrary, simple structs
use parquet_derive::ParquetRecordWriter;
#[derive(Debug, ParquetRecordWriter)]
struct MyStruct {
    id: i32,
    name: String,
}
fn main() {
    let record = MyStruct { id: 1, name: "test".to_string() };
    println!("Derived a Parquet RecordWriter for: {record:?}");
}
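The derive produces an implementation of parquet::record::RecordWriter for slices of the struct, which supplies both the Parquet schema and row-group writing. A minimal sketch of using it to write records to an in-memory buffer with default writer properties:
use std::sync::Arc;
use parquet::file::properties::WriterProperties;
use parquet::file::writer::SerializedFileWriter;
use parquet::record::RecordWriter;
use parquet_derive::ParquetRecordWriter;
#[derive(ParquetRecordWriter)]
struct MyStruct {
    id: i32,
    name: String,
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let records = vec![MyStruct { id: 1, name: "test".to_string() }];
    // The derived impl provides the Parquet schema matching the struct's fields
    let schema = records.as_slice().schema()?;
    let mut writer = SerializedFileWriter::new(Vec::new(), schema, Arc::new(WriterProperties::default()))?;
    let mut row_group = writer.next_row_group()?;
    // ...and writes a slice of structs into a Parquet row group
    records.as_slice().write_to_row_group(&mut row_group)?;
    row_group.close()?;
    writer.close()?;
    Ok(())
}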
Links
• https://crates.io/crates/arrow
• https://crates.io/crates/parquet
• https://crates.io/crates/parquet-derive
• https://crates.io/crates/arrow-flight
• https://docs.rs/arrow/latest
• https://docs.rs/parquet/latest
• https://docs.rs/arrow-flight/latest
• https://docs.rs/parquet-derive/latest
• https://arrow.apache.org/rust
• https://github.com/apache/arrow-rs-object-store
• https://semver.org/
• https://github.com/apache/arrow-rs/issues/7835
• https://github.com/apache/arrow-rs/milestone/3
• https://github.com/apache/arrow-rs/milestone/5
• https://github.com/apache/arrow-rs/milestone/6
• https://github.com/apache/arrow-rs/issues/5368
• https://github.com/apache/arrow-rs/blob/main/CONTRIBUTING.md#breaking-changes
• https://github.com/apache/arrow-rs/issues/6737
• https://crates.io/crates/object-store
• https://crates.io/crates/datafusion
• https://crates.io/crates/ballista
• https://crates.io/crates/parquet_opendal
• https://github.com/apache/datafusion/blob/main/README.md
• https://github.com/apache/datafusion-ballista/blob/main/README.md
• https://github.com/apache/arrow-rs/blob/main/integrations/parquet/README.md
• https://github.com/apache/arrow-rs-object-store/blob/main/README.md
• https://arrow.apache.org/community/
• https://s.apache.org/slack-invite
• https://github.com/apache/arrow-rs/discussions
• https://discord.gg/YAb2TdazKQ
• https://github.com/apache/arrow-rs/issues
• https://www.rust-lang.org/
• https://github.com/apache/arrow-rs/tree/57.0.0
• https://github.com/apache/arrow-rs/compare/56.2.0...57.0.0
Release info
Release version: 57.0.0
Release description
This release introduces significant changes and enhancements across the Arrow Rust project. A major breaking change involves the consistent use of Arc<FileEncryptionProperties> throughout, aiming to reduce the size of ParquetMetaData and avoid unnecessary copying when encryption is enabled. The arrow-avro crate has been added, bringing support for the Avro data format, including new crates and roundtrip tests. Enhancements to DataType display formatting improve readability for types like RunEndEncoded, Map, ListView, and LargeListView. The Parquet module refactors its Thrift-related code, improving efficiency and organization by using a custom Thrift parser and storing encodings as a bitmask. Several bugs have been fixed, including issues with decimal casting, handling of old timestamp formats, and memory consumption in the compressed writer. Performance improvements include zstd compression context reuse and SIMD optimizations for byte builders. Documentation has been updated, and the project has migrated to the Rust 2024 edition.
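As one concrete illustration of the display-formatting change, a minimal sketch printing an affected type (the field names are arbitrary; this only exercises DataType's Display impl):
use std::sync::Arc;
use arrow::datatypes::{DataType, Field};
fn main() {
    let run_ends = Arc::new(Field::new("run_ends", DataType::Int32, false));
    let values = Arc::new(Field::new("values", DataType::Utf8, true));
    // Display output for nested types such as RunEndEncoded was made more readable in 57.0.0
    println!("{}", DataType::RunEndEncoded(run_ends, values));
}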
Minor update: 57.1.0 → 57.2.0
$ INSTALL
cargo add arrow-flight
Or add to Cargo.toml: arrow-flight = "57.2.0"