or4cl3ai/Aiden_t5
//! Core traits for rule definitions and rule context.
//! As well as an internal prelude to make imports for rules easier.
#![allow(unused_variables, unused_imports)]
use crate::autofix::Fixer;
use crate::Diagnostic;
use dyn_clone::DynClone;
use rslint_errors::Severity;
use rslint_parser::{SyntaxNode, SyntaxNodeExt, SyntaxToken};
use rslint_text_edit::apply_indels;
use serde::{Deserialize, Serialize};
use std::borrow::Borrow;
use std::fmt::Debug;
use std::marker::{Send, Sync};
use std::ops::{Deref, DerefMut, Drop};
use std::rc::Rc;
use std::sync::Arc;
/// The main type of rule run by the runner. The rule takes individual
/// nodes inside of a Concrete Syntax Tree and checks them.
/// It may also take individual syntax tokens.
/// Rules must all be [`Send`] + [`Sync`], because rules are run in parallel.
///
/// # Rule Level Configuration
/// Rules do not know about the lint level they were configured for; the runner
/// runs the rules, then maps any error/warning diagnostics to their appropriate severity.
/// This saves on boilerplate code for getting the appropriate diagnostic builder type and config.
///
/// # Guidelines
/// This is a list of guidelines and tips you should generally follow when implementing a rule:
/// - Do not use text-based equality; it is inaccurate. Instead use [`lexical_eq`](SyntaxNodeExt::lexical_eq).
/// - Avoid using `text_range` on nodes; it is inaccurate because it may include whitespace. Instead use [`trimmed_range`](SyntaxNodeExt::trimmed_range).
/// - Avoid using `text` on nodes for the same reason; use [`trimmed_text`](SyntaxNodeExt::trimmed_text).
/// - If you can offer better diagnostics and more context around a rule error, __always__ do it! It is a central goal
/// of the project to offer very helpful diagnostics.
/// - Do not be afraid to clone syntax nodes, AST nodes, and syntax tokens. They are all backed by an [`Rc`](std::rc::Rc) around node data,
/// so they can be cheaply cloned (but if you can, have your functions take a reference, since `Rc` cloning is not zero cost).
/// - Do not try to rely on the result of other rules; this is impossible because rules run concurrently.
/// - Do not rely on file data of different files. There is a separate rule type for this.
/// - Do not unwrap pieces of an AST node (sometimes it is ok because they are guaranteed to be there), since that will cause panics
/// when error recovery has produced an incomplete tree.
/// - Do not use node or string coloring outside of diagnostic notes, it messes with termcolor and ends up looking horrible.
#[typetag::serde]
pub trait CstRule: Rule {
/// Check an individual node in the syntax tree.
/// You can use the `match_ast` macro to make matching a node to an ast node easier.
/// The reason this uses nodes and not a visitor is that nodes are more flexible;
/// converting them to an AST node has zero cost, and you can easily traverse surrounding nodes.
/// Defaults to doing nothing.
///
/// The return type is `Option<()>` to allow usage of `?` on the properties of AST nodes which are all optional.
#[inline]
fn check_node(&self, node: &SyntaxNode, ctx: &mut RuleCtx) -> Option<()> {
None
}
/// Check an individual token in the syntax tree.
/// Defaults to doing nothing.
#[inline]
fn check_token(&self, token: &SyntaxToken, ctx: &mut RuleCtx) -> Option<()> {
None
}
/// Check the root of the tree one time.
/// This method is guaranteed to only be called once.
/// The root's kind will be either `SCRIPT` or `MODULE`.
/// Defaults to doing nothing.
#[inline]
fn check_root(&self, root: &SyntaxNode, ctx: &mut RuleCtx) -> Option<()> {
None
}
}
/// A generic trait which describes things common to a rule regardless of what it runs on.
///
/// Each rule should have a `new` function for easy instantiation. We do not require this,
/// however, so that more complex rules can instantiate themselves in a different way.
/// Rules must nonetheless be easy to instantiate because of rule groups.
pub trait Rule: Debug + DynClone + Send + Sync {
/// A unique, kebab-case name for the rule.
fn name(&self) -> &'static str;
/// The name of the group this rule belongs to.
fn group(&self) -> &'static str;
/// Optional docs for the rule; an empty string by default.
fn docs(&self) -> &'static str {
""
}
}
dyn_clone::clone_trait_object!(Rule);
dyn_clone::clone_trait_object!(CstRule);
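The `Rule` trait plus `clone_trait_object!` calls above let the runner hold heterogeneous, clonable rules behind `Box<dyn Rule>`. A minimal, self-contained sketch of that pattern follows; the rule name `NoEmptyBlock` is a hypothetical example, not a real rslint rule, and `boxed_clone` stands in for what `dyn_clone` derives automatically:

```rust
// Object-safe trait exposing rule metadata plus manual boxed cloning,
// mirroring what `dyn_clone::clone_trait_object!` generates.
trait Rule {
    fn name(&self) -> &'static str;
    fn group(&self) -> &'static str;
    fn boxed_clone(&self) -> Box<dyn Rule>;
}

#[derive(Clone)]
struct NoEmptyBlock;

impl Rule for NoEmptyBlock {
    fn name(&self) -> &'static str {
        "no-empty-block" // unique, kebab-case name
    }
    fn group(&self) -> &'static str {
        "errors"
    }
    fn boxed_clone(&self) -> Box<dyn Rule> {
        Box::new(self.clone())
    }
}

fn main() {
    // Rules of different concrete types can live in one collection.
    let rules: Vec<Box<dyn Rule>> = vec![Box::new(NoEmptyBlock)];
    let copy = rules[0].boxed_clone();
    println!("{}/{}", copy.group(), copy.name());
}
```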
/// A trait describing rules whose configuration can be automatically deduced (inferred) from
/// parsed syntax trees.
#[typetag::serde]
pub trait Inferable: CstRule {
/// Infer the options for the rule from multiple nodes (which may be from different trees) and change them
fn infer(&mut self, nodes: &[SyntaxNode]);
}
/// The level configured for a rule.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub enum RuleLevel {
Warning,
Error,
}
/// Context given to a rule when running it.
// This is passed by reference and not by Arc, which is very important:
// Arcs are expensive to copy, and for 50 rules running on 50 files we would make
// a total of 2,500 copies, which is non-ideal at best.
#[derive(Debug, Clone)]
pub struct RuleCtx {
/// The file id of the file being linted.
pub file_id: usize,
/// Whether the linter is run with the `--verbose` option,
/// which dictates whether the linter should include more (potentially spammy) context in diagnostics.
pub verbose: bool,
/// An empty vector of diagnostics which the rule adds to.
pub diagnostics: Vec<Diagnostic>,
pub fixer: Option<Fixer>,
pub src: Arc<String>,
}
impl RuleCtx {
/// Make a new diagnostic builder.
pub fn err(&mut self, code: impl Into<String>, message: impl Into<String>) -> Diagnostic {
Diagnostic::error(self.file_id, code.into(), message.into())
}
pub fn add_err(&mut self, diagnostic: Diagnostic) {
self.diagnostics.push(diagnostic)
}
/// Make a new fixer for this context and return a mutable reference to it
pub fn fix(&mut self) -> &mut Fixer {
let fixer = Fixer::new(self.src.clone());
self.fixer = Some(fixer);
self.fixer.as_mut().unwrap()
}
/// Create a context which is used to simply run a rule without needing to know about
/// the resulting fixer; the ctx's source is therefore not a valid source.
pub(crate) fn dummy_ctx() -> Self {
Self {
file_id: 0,
verbose: false,
diagnostics: vec![],
fixer: None,
src: Arc::new(String::new()),
}
}
}
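`RuleCtx::fix` above builds a `Fixer` over the shared source (`Arc<String>`) and hands the rule a mutable reference to record fixes. A self-contained sketch of that pattern is below; the real implementation uses `rslint_text_edit` indels rather than this toy `(start, end, replacement)` edit list:

```rust
use std::sync::Arc;

// Minimal fixer: the context owns the source behind an Arc, and the fixer
// records byte-range replacements that are applied in one pass.
struct Fixer {
    src: Arc<String>,
    edits: Vec<(usize, usize, String)>, // (start, end, replacement)
}

impl Fixer {
    fn new(src: Arc<String>) -> Self {
        Fixer { src, edits: Vec::new() }
    }

    fn replace(&mut self, start: usize, end: usize, text: &str) {
        self.edits.push((start, end, text.to_string()));
    }

    fn apply(&self) -> String {
        let mut out = self.src.as_str().to_string();
        // Apply edits back-to-front so earlier byte offsets stay valid.
        let mut edits = self.edits.clone();
        edits.sort_by_key(|e| std::cmp::Reverse(e.0));
        for (start, end, text) in edits {
            out.replace_range(start..end, &text);
        }
        out
    }
}

fn main() {
    let src = Arc::new("var x = 1;".to_string());
    let mut fixer = Fixer::new(src);
    fixer.replace(0, 3, "let");
    println!("{}", fixer.apply());
}
```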
/// The result of running a single rule on a syntax tree.
#[derive(Debug, Clone)]
pub struct RuleResult {
pub diagnostics: Vec<Diagnostic>,
pub fixer: Option<Fixer>,
}
impl RuleResult {
/// Make a new rule result with diagnostics and an optional fixer.
pub fn new(diagnostics: Vec<Diagnostic>, fixer: impl Into<Option<Fixer>>) -> Self {
Self {
diagnostics,
fixer: fixer.into(),
}
}
    /// Get the overall outcome of running this rule.
pub fn outcome(&self) -> Outcome {
Outcome::from(&self.diagnostics)
}
    /// Merge two results. This joins `self`'s and `other`'s diagnostics and takes
    /// `self`'s fixer if available, otherwise `other`'s fixer.
pub fn merge(self, other: RuleResult) -> RuleResult {
RuleResult {
diagnostics: [self.diagnostics, other.diagnostics].concat(),
fixer: self.fixer.or(other.fixer),
}
}
/// Attempt to fix the issue if the rule can be autofixed.
pub fn fix(&self) -> Option<String> {
self.fixer.as_ref().map(|x| x.apply())
}
}
/// The overall result of running a single rule or linting a file.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum Outcome {
/// Running the rule resulted in one or more errors.
/// The rule result may have also included warnings or notes.
Failure,
/// Running the rule resulted in one or more warnings.
/// May also include notes.
Warning,
/// Running the rule resulted in no errors or warnings.
/// May include note diagnostics (which are very rare).
Success,
}
impl<T> From<T> for Outcome
where
T: IntoIterator,
T::Item: Borrow<Diagnostic>,
{
fn from(diagnostics: T) -> Self {
let mut outcome = Outcome::Success;
for diagnostic in diagnostics {
match diagnostic.borrow().severity {
Severity::Error => outcome = Outcome::Failure,
Severity::Warning if outcome != Outcome::Failure => outcome = Outcome::Warning,
_ => {}
}
}
outcome
}
}
impl Outcome {
pub fn merge(outcomes: impl IntoIterator<Item = impl Borrow<Outcome>>) -> Outcome {
let mut overall = Outcome::Success;
for outcome in outcomes {
match outcome.borrow() {
Outcome::Failure => overall = Outcome::Failure,
Outcome::Warning if overall != Outcome::Failure => overall = Outcome::Warning,
_ => {}
}
}
overall
}
}
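The `From` impl and `Outcome::merge` above both fold severities with the same precedence: any error forces `Failure`, a warning upgrades `Success` to `Warning`, and notes are ignored. A self-contained sketch of that folding logic, with simplified stand-in enums:

```rust
// Stand-ins for rslint_errors::Severity and the crate's Outcome.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Severity {
    Error,
    Warning,
    Note,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum Outcome {
    Failure,
    Warning,
    Success,
}

// Fold severities: errors dominate warnings, warnings dominate success.
fn outcome_of(severities: &[Severity]) -> Outcome {
    let mut outcome = Outcome::Success;
    for s in severities {
        match s {
            Severity::Error => outcome = Outcome::Failure,
            Severity::Warning if outcome != Outcome::Failure => {
                outcome = Outcome::Warning
            }
            _ => {} // notes, or warnings after a failure, change nothing
        }
    }
    outcome
}

fn main() {
    println!("{:?}", outcome_of(&[Severity::Note, Severity::Warning]));
}
```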
#[macro_export]
#[doc(hidden)]
macro_rules! __pre_parse_docs_from_meta {
(
@$cb:tt
@[docs $($docs:tt)*]
@$other:tt
#[doc = $doc:expr]
$($rest:tt)*
) => (
$crate::__pre_parse_docs_from_meta! {
@$cb
@[docs $($docs)* $doc]
@$other
$($rest)*
}
);
(
@$cb:tt
@$docs:tt
@[others $($others:tt)*]
#[$other:meta]
$($rest:tt)*
) => (
$crate::__pre_parse_docs_from_meta! {
@$cb
@$docs
@[others $($others)* $other]
$($rest)*
}
);
(
@[cb $($cb:tt)*]
@[docs $($docs:tt)*]
@[others $($others:tt)*]
$($rest:tt)*
) => (
$($cb)* ! {
#[doc = concat!($(indoc::indoc!($docs), "\n"),*)]
$(
#[$others]
)*
$($rest)*
}
);
(
$(:: $(@ $colon:tt)?)? $($cb:ident)::+ ! {
$($input:tt)*
}
) => (
$crate::__pre_parse_docs_from_meta! {
@[cb $(:: $($colon)?)? $($cb)::+]
@[docs ]
@[others ]
$($input)*
}
);
}
#[macro_export]
#[doc(hidden)]
macro_rules! __declare_lint_inner {
(
#[doc = $doc:expr]
$(#[$outer:meta])*
// The rule struct name
$name:ident,
$group:ident,
// A unique kebab-case name for the rule
$code:expr
$(,
// Any fields for the rule
$(
$(#[$inner:meta])*
$visibility:vis $key:ident : $val:ty
),* $(,)?
)?
) => {
use $crate::Rule;
use serde::{Deserialize, Serialize};
#[doc = $doc]
#[serde(rename_all = "camelCase")]
$(#[$outer])*
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct $name {
$(
$(
$(#[$inner])*
pub $key: $val
),
*)?
}
impl $name {
pub fn new() -> Self {
Self::default()
}
}
impl Rule for $name {
fn name(&self) -> &'static str {
$code
}
fn group(&self) -> &'static str {
stringify!($group)
}
fn docs(&self) -> &'static str {
$doc
}
}
};
}
/// A macro to easily generate rule boilerplate code.
///
/// ```ignore
/// declare_lint! {
/// /// A description of the rule here
/// /// This will be used as the doc for the rule struct
/// RuleName,
/// // The name of the group this rule belongs to.
/// groupname,
/// // Make sure this is kebab-case and unique.
/// "rule-name",
/// /// A description of the attribute here, used for config docs.
/// pub config_attr: u8,
/// pub another_attr: String
/// }
/// ```
///
/// # Rule name and docs
///
/// The macro's first argument is an identifier for the rule structure.
/// This should always be a PascalCase name. You will have to either derive Default for the struct
/// or implement it manually.
///
/// The macro also accepts any doc comments for the rule name. These comments
/// are then used by an xtask script to generate markdown files for user facing docs.
/// Each rule doc should include an `Incorrect Code Examples` header. It may also optionally
/// include a `Correct Code Examples` header. Do not include a `Config` header; it is autogenerated
/// from config field docs.
///
/// # Config
///
/// After the rule code, the macro accepts fields for the struct. Any field which is
/// public will be used for config; you can, however, disable this by using `#[serde(skip)]`.
/// Every public (config) field should have a doc comment; the doc comments will be used for
/// user-facing documentation, so try to be non-technical and non-Rust-specific in them.
/// **All config fields will be renamed to camelCase.**
///
///
/// This will generate a rule struct with `RuleName`,
/// and use the optional config attributes defined for the config of the rule.
/// You must make sure each config field is Deserializable.
#[macro_export]
macro_rules! declare_lint {
($($input:tt)*) => {
$crate::__pre_parse_docs_from_meta! {
$crate::__declare_lint_inner! { $($input)* }
}
};
}
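The docs above describe what `declare_lint!` expands to: a struct, a `new` constructor, and a `Rule` impl wired to the rule's code and group. A heavily simplified, self-contained sketch of that boilerplate generation follows; it omits doc handling, serde attributes, and config fields, and `NoDebugger` is a hypothetical rule name:

```rust
trait Rule {
    fn name(&self) -> &'static str;
    fn group(&self) -> &'static str;
}

// Simplified stand-in for `declare_lint!`: generate the struct and its
// metadata impl from (struct name, group ident, kebab-case code).
macro_rules! declare_lint_sketch {
    ($name:ident, $group:ident, $code:expr) => {
        #[derive(Debug, Default, Clone)]
        pub struct $name;

        impl Rule for $name {
            fn name(&self) -> &'static str {
                $code
            }
            fn group(&self) -> &'static str {
                stringify!($group)
            }
        }
    };
}

declare_lint_sketch!(NoDebugger, errors, "no-debugger");

fn main() {
    let rule = NoDebugger::default();
    println!("{} in {}", rule.name(), rule.group());
}
```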
#![allow(clippy::module_inception)]
#![allow(clippy::upper_case_acronyms)]
#![allow(clippy::large_enum_variant)]
#![allow(clippy::wrong_self_convention)]
#![allow(clippy::should_implement_trait)]
#![allow(clippy::blacklisted_name)]
#![allow(clippy::vec_init_then_push)]
#![allow(rustdoc::bare_urls)]
#![warn(missing_docs)]
//! <p></p>
//! <p>Amazon Managed Blockchain is a fully managed service for creating and managing blockchain networks using open-source frameworks. Blockchain allows you to build applications where multiple parties can securely and transparently run transactions and share data without the need for a trusted, central authority.</p>
//! <p>Managed Blockchain supports the Hyperledger Fabric and Ethereum open-source frameworks. Because of fundamental differences between the frameworks, some API actions or data types may only apply in the context of one framework and not the other. For example, actions related to Hyperledger Fabric network members such as <code>CreateMember</code> and <code>DeleteMember</code> do not apply to Ethereum.</p>
//! <p>The description for each action indicates the framework or frameworks to which it applies. Data types and properties that apply only in the context of a particular framework are similarly indicated.</p>
//!
//! # Crate Organization
//!
//! The entry point for most customers will be [`Client`]. [`Client`] exposes one method for each API offered
//! by the service.
//!
//! Some APIs require complex or nested arguments. These exist in [`model`].
//!
//! Lastly, errors that can be returned by the service are contained within [`error`]. [`Error`] defines a meta
//! error encompassing all possible errors that can be returned by the service.
//!
//! The other modules within this crate are not required for normal usage.
//!
//! # Examples
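//!
//! A minimal, hypothetical usage sketch following the common smithy-rs client
//! pattern. The operation name (`list_networks`) and the config loader are
//! assumptions for illustration, not verified against this generated crate:
//!
//! ```no_run
//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {
//! // Load AWS configuration from the environment and build a service client.
//! let config = aws_config::load_from_env().await;
//! let client = aws_sdk_managedblockchain::Client::new(&config);
//! // Call an operation through its fluent builder and send the request.
//! let _networks = client.list_networks().send().await?;
//! # Ok(())
//! # }
//! ```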
// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
pub use error_meta::Error;
pub use config::Config;
mod aws_endpoint;
/// Client and fluent builders for calling the service.
#[cfg(feature = "client")]
pub mod client;
/// Configuration for the service.
pub mod config;
/// Errors that can occur when calling the service.
pub mod error;
mod error_meta;
mod idempotency_token;
/// Input structures for operations.
pub mod input;
mod json_deser;
mod json_errors;
mod json_ser;
/// Data structures used by operation inputs/outputs.
pub mod model;
mod no_credentials;
/// All operations that this crate can perform.
pub mod operation;
mod operation_deser;
mod operation_ser;
/// Output structures for operations.
pub mod output;
/// Crate version number.
pub static PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
pub use aws_smithy_http::byte_stream::ByteStream;
pub use aws_smithy_http::result::SdkError;
pub use aws_smithy_types::Blob;
pub use aws_smithy_types::DateTime;
static API_METADATA: aws_http::user_agent::ApiMetadata =
aws_http::user_agent::ApiMetadata::new("managedblockchain", PKG_VERSION);
pub use aws_smithy_http::endpoint::Endpoint;
pub use aws_smithy_types::retry::RetryConfig;
pub use aws_types::app_name::AppName;
pub use aws_types::region::Region;
pub use aws_types::Credentials;
#[cfg(feature = "client")]
pub use client::Client;
extern crate serde;
extern crate rltk;
use rltk::{Console, GameState, Rltk, Point};
extern crate specs;
use specs::prelude::*;
use specs::saveload::{SimpleMarker, SimpleMarkerAllocator};
#[macro_use]
extern crate specs_derive;
mod components;
pub use components::*;
mod map;
pub use map::*;
mod player;
use player::*;
mod rect;
pub use rect::Rect;
mod visibility_system;
use visibility_system::VisibilitySystem;
mod monster_ai_system;
use monster_ai_system::MonsterAI;
mod map_indexing_system;
use map_indexing_system::MapIndexingSystem;
mod melee_combat_system;
use melee_combat_system::MeleeCombatSystem;
mod damage_system;
use damage_system::DamageSystem;
mod gui;
mod gamelog;
mod spawner;
mod inventory_system;
use inventory_system::{ ItemCollectionSystem, ItemUseSystem, ItemDropSystem, ItemRemoveSystem };
pub mod saveload_system;
pub mod random_table;
pub mod particle_system;
pub mod hunger_system;
pub mod rex_assets;
pub mod trigger_system;
pub mod map_builders;
const SHOW_MAPGEN_VISUALIZER : bool = false;
#[derive(PartialEq, Copy, Clone)]
pub enum RunState { AwaitingInput,
PreRun,
PlayerTurn,
MonsterTurn,
ShowInventory,
ShowDropItem,
ShowTargeting { range : i32, item : Entity},
MainMenu { menu_selection : gui::MainMenuSelection },
SaveGame,
NextLevel,
ShowRemoveItem,
GameOver,
MagicMapReveal { row : i32 },
MapGeneration
}
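The `RunState` enum above drives a turn-based state machine inside `tick`: `PreRun` seeds the loop, then input, the player turn, and the monster turn cycle back to waiting for input. A self-contained sketch of that core cycle (ignoring menus, targeting, and map generation, which branch off the same machine):

```rust
// Reduced version of the turn cycle encoded by RunState.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RunState {
    PreRun,
    AwaitingInput,
    PlayerTurn,
    MonsterTurn,
}

fn next_state(state: RunState, got_input: bool) -> RunState {
    match state {
        // Run systems once, then hand control to the player.
        RunState::PreRun => RunState::AwaitingInput,
        // Stay put until the player actually presses something.
        RunState::AwaitingInput => {
            if got_input {
                RunState::PlayerTurn
            } else {
                RunState::AwaitingInput
            }
        }
        // After the player acts, the monsters act, then back to input.
        RunState::PlayerTurn => RunState::MonsterTurn,
        RunState::MonsterTurn => RunState::AwaitingInput,
    }
}

fn main() {
    let mut s = RunState::PreRun;
    for _ in 0..4 {
        s = next_state(s, true);
        println!("{:?}", s);
    }
}
```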
pub struct State {
pub ecs: World,
mapgen_next_state : Option<RunState>,
mapgen_history : Vec<Map>,
mapgen_index : usize,
mapgen_timer : f32
}
impl State {
fn run_systems(&mut self) {
let mut vis = VisibilitySystem{};
vis.run_now(&self.ecs);
let mut mob = MonsterAI{};
mob.run_now(&self.ecs);
let mut mapindex = MapIndexingSystem{};
mapindex.run_now(&self.ecs);
let mut triggers = trigger_system::TriggerSystem{};
triggers.run_now(&self.ecs);
let mut melee = MeleeCombatSystem{};
melee.run_now(&self.ecs);
let mut damage = DamageSystem{};
damage.run_now(&self.ecs);
let mut pickup = ItemCollectionSystem{};
pickup.run_now(&self.ecs);
let mut itemuse = ItemUseSystem{};
itemuse.run_now(&self.ecs);
let mut drop_items = ItemDropSystem{};
drop_items.run_now(&self.ecs);
let mut item_remove = ItemRemoveSystem{};
item_remove.run_now(&self.ecs);
let mut hunger = hunger_system::HungerSystem{};
hunger.run_now(&self.ecs);
let mut particles = particle_system::ParticleSpawnSystem{};
particles.run_now(&self.ecs);
self.ecs.maintain();
}
}
impl GameState for State {
fn tick(&mut self, ctx : &mut Rltk) {
let mut newrunstate;
{
let runstate = self.ecs.fetch::<RunState>();
newrunstate = *runstate;
}
ctx.cls();
particle_system::cull_dead_particles(&mut self.ecs, ctx);
match newrunstate {
RunState::MainMenu{..} => {}
RunState::GameOver{..} => {}
_ => {
draw_map(&self.ecs.fetch::<Map>(), ctx);
let positions = self.ecs.read_storage::<Position>();
let renderables = self.ecs.read_storage::<Renderable>();
let hidden = self.ecs.read_storage::<Hidden>();
let map = self.ecs.fetch::<Map>();
let mut data = (&positions, &renderables, !&hidden).join().collect::<Vec<_>>();
data.sort_by(|&a, &b| b.1.render_order.cmp(&a.1.render_order) );
for (pos, render, _hidden) in data.iter() {
let idx = map.xy_idx(pos.x, pos.y);
if map.visible_tiles[idx] { ctx.set(pos.x, pos.y, render.fg, render.bg, render.glyph) }
}
gui::draw_ui(&self.ecs, ctx);
}
}
match newrunstate {
RunState::MapGeneration => {
if !SHOW_MAPGEN_VISUALIZER {
newrunstate = self.mapgen_next_state.unwrap();
} else {
ctx.cls();
draw_map(&self.mapgen_history[self.mapgen_index], ctx);
self.mapgen_timer += ctx.frame_time_ms;
if self.mapgen_timer > 200.0 {
self.mapgen_timer = 0.0;
self.mapgen_index += 1;
if self.mapgen_index >= self.mapgen_history.len() {
//self.mapgen_index -= 1;
newrunstate = self.mapgen_next_state.unwrap();
}
}
}
}
RunState::PreRun => {
self.run_systems();
self.ecs.maintain();
newrunstate = RunState::AwaitingInput;
}
RunState::AwaitingInput => {
newrunstate = player_input(self, ctx);
}
RunState::PlayerTurn => {
self.run_systems();
self.ecs.maintain();
match *self.ecs.fetch::<RunState>() {
RunState::MagicMapReveal{ .. } => newrunstate = RunState::MagicMapReveal{ row: 0 },
_ => newrunstate = RunState::MonsterTurn
}
}
RunState::MonsterTurn => {
self.run_systems();
self.ecs.maintain();
newrunstate = RunState::AwaitingInput;
}
RunState::ShowInventory => {
let result = gui::show_inventory(self, ctx);
match result.0 {
gui::ItemMenuResult::Cancel => newrunstate = RunState::AwaitingInput,
gui::ItemMenuResult::NoResponse => {}
gui::ItemMenuResult::Selected => {
let item_entity = result.1.unwrap();
let is_ranged = self.ecs.read_storage::<Ranged>();
let is_item_ranged = is_ranged.get(item_entity);
if let Some(is_item_ranged) = is_item_ranged {
newrunstate = RunState::ShowTargeting{ range: is_item_ranged.range, item: item_entity };
} else {
let mut intent = self.ecs.write_storage::<WantsToUseItem>();
intent.insert(*self.ecs.fetch::<Entity>(), WantsToUseItem{ item: item_entity, target: None }).expect("Unable to insert intent");
newrunstate = RunState::PlayerTurn;
}
}
}
}
RunState::ShowDropItem => {
let result = gui::drop_item_menu(self, ctx);
match result.0 {
gui::ItemMenuResult::Cancel => newrunstate = RunState::AwaitingInput,
gui::ItemMenuResult::NoResponse => {}
gui::ItemMenuResult::Selected => {
let item_entity = result.1.unwrap();
let mut intent = self.ecs.write_storage::<WantsToDropItem>();
intent.insert(*self.ecs.fetch::<Entity>(), WantsToDropItem{ item: item_entity }).expect("Unable to insert intent");
newrunstate = RunState::PlayerTurn;
}
}
}
RunState::ShowRemoveItem => {
let result = gui::remove_item_menu(self, ctx);
match result.0 {
gui::ItemMenuResult::Cancel => newrunstate = RunState::AwaitingInput,
gui::ItemMenuResult::NoResponse => {}
gui::ItemMenuResult::Selected => {
let item_entity = result.1.unwrap();
let mut intent = self.ecs.write_storage::<WantsToRemoveItem>();
intent.insert(*self.ecs.fetch::<Entity>(), WantsToRemoveItem{ item: item_entity }).expect("Unable to insert intent");
newrunstate = RunState::PlayerTurn;
}
}
}
RunState::ShowTargeting{range, item} => {
let result = gui::ranged_target(self, ctx, range);
match result.0 {
gui::ItemMenuResult::Cancel => newrunstate = RunState::AwaitingInput,
gui::ItemMenuResult::NoResponse => {}
gui::ItemMenuResult::Selected => {
let mut intent = self.ecs.write_storage::<WantsToUseItem>();
intent.insert(*self.ecs.fetch::<Entity>(), WantsToUseItem{ item, target: result.1 }).expect("Unable to insert intent");
newrunstate = RunState::PlayerTurn;
}
}
}
RunState::MainMenu{ .. } => {
let result = gui::main_menu(self, ctx);
match result {
gui::MainMenuResult::NoSelection{ selected } => newrunstate = RunState::MainMenu{ menu_selection: selected },
gui::MainMenuResult::Selected{ selected } => {
match selected {
gui::MainMenuSelection::NewGame => newrunstate = RunState::PreRun,
gui::MainMenuSelection::LoadGame => {
saveload_system::load_game(&mut self.ecs);
newrunstate = RunState::AwaitingInput;
saveload_system::delete_save();
}
gui::MainMenuSelection::Quit => { ::std::process::exit(0); }
}
}
}
}
RunState::GameOver => {
let result = gui::game_over(ctx);
match result {
gui::GameOverResult::NoSelection => {}
gui::GameOverResult::QuitToMenu => {
self.game_over_cleanup();
newrunstate = RunState::MapGeneration;
self.mapgen_next_state = Some(RunState::MainMenu{ menu_selection: gui::MainMenuSelection::NewGame });
}
}
}
RunState::SaveGame => {
saveload_system::save_game(&mut self.ecs);
newrunstate = RunState::MainMenu{ menu_selection : gui::MainMenuSelection::LoadGame };
}
RunState::NextLevel => {
self.goto_next_level();
self.mapgen_next_state = Some(RunState::PreRun);
newrunstate = RunState::MapGeneration;
}
RunState::MagicMapReveal{row} => {
let mut map = self.ecs.fetch_mut::<Map>();
for x in 0..MAPWIDTH {
let idx = map.xy_idx(x as i32,row);
map.revealed_tiles[idx] = true;
}
if row as usize == MAPHEIGHT-1 {
newrunstate = RunState::MonsterTurn;
} else {
newrunstate = RunState::MagicMapReveal{ row: row+1 };
}
}
}
{
let mut runwriter = self.ecs.write_resource::<RunState>();
*runwriter = newrunstate;
}
damage_system::delete_the_dead(&mut self.ecs);
}
}
impl State {
fn entities_to_remove_on_level_change(&mut self) -> Vec<Entity> {
let entities = self.ecs.entities();
let player = self.ecs.read_storage::<Player>();
let backpack = self.ecs.read_storage::<InBackpack>();
let player_entity = self.ecs.fetch::<Entity>();
let equipped = self.ecs.read_storage::<Equipped>();
let mut to_delete : Vec<Entity> = Vec::new();
for entity in entities.join() {
let mut should_delete = true;
// Don't delete the player
let p = player.get(entity);
if let Some(_p) = p {
should_delete = false;
}
// Don't delete the player's equipment
let bp = backpack.get(entity);
if let Some(bp) = bp {
if bp.owner == *player_entity {
should_delete = false;
}
}
let eq = equipped.get(entity);
if let Some(eq) = eq {
if eq.owner == *player_entity {
should_delete = false;
}
}
if should_delete {
to_delete.push(entity);
}
}
to_delete
}
fn goto_next_level(&mut self) {
// Delete entities that aren't the player or his/her equipment
let to_delete = self.entities_to_remove_on_level_change();
for target in to_delete {
self.ecs.delete_entity(target).expect("Unable to delete entity");
}
// Build a new map and place the player
let current_depth;
{
let worldmap_resource = self.ecs.fetch::<Map>();
current_depth = worldmap_resource.depth;
}
self.generate_world_map(current_depth + 1);
// Notify the player and give them some health
let player_entity = self.ecs.fetch::<Entity>();
let mut gamelog = self.ecs.fetch_mut::<gamelog::GameLog>();
gamelog.entries.push("You descend to the next level, and take a moment to heal.".to_string());
let mut player_health_store = self.ecs.write_storage::<CombatStats>();
let player_health = player_health_store.get_mut(*player_entity);
if let Some(player_health) = player_health {
player_health.hp = i32::max(player_health.hp, player_health.max_hp / 2);
}
}
fn game_over_cleanup(&mut self) {
// Delete everything
let mut to_delete = Vec::new();
for e in self.ecs.entities().join() {
to_delete.push(e);
}
for del in to_delete.iter() {
self.ecs.delete_entity(*del).expect("Deletion failed");
}
// Spawn a new player
{
let player_entity = spawner::player(&mut self.ecs, 0, 0);
let mut player_entity_writer = self.ecs.write_resource::<Entity>();
*player_entity_writer = player_entity;
}
// Build a new map and place the player
self.generate_world_map(1);
}
fn generate_world_map(&mut self, new_depth : i32) {
self.mapgen_index = 0;
self.mapgen_timer = 0.0;
self.mapgen_history.clear();
let mut rng = self.ecs.write_resource::<rltk::RandomNumberGenerator>();
let mut builder = map_builders::random_builder(new_depth, &mut rng);
builder.build_map(&mut rng);
self.mapgen_history = builder.build_data.history.clone();
let player_start;
{
let mut worldmap_resource = self.ecs.write_resource::<Map>();
*worldmap_resource = builder.build_data.map.clone();
player_start = builder.build_data.starting_position.as_mut().unwrap().clone();
}
// Spawn bad guys
std::mem::drop(rng);
builder.spawn_entities(&mut self.ecs);
// Place the player and update resources
let (player_x, player_y) = (player_start.x, player_start.y);
let mut player_position = self.ecs.write_resource::<Point>();
*player_position = Point::new(player_x, player_y);
let mut position_components = self.ecs.write_storage::<Position>();
let player_entity = self.ecs.fetch::<Entity>();
let player_pos_comp = position_components.get_mut(*player_entity);
if let Some(player_pos_comp) = player_pos_comp {
player_pos_comp.x = player_x;
player_pos_comp.y = player_y;
}
// Mark the player's visibility as dirty
let mut viewshed_components = self.ecs.write_storage::<Viewshed>();
let vs = viewshed_components.get_mut(*player_entity);
if let Some(vs) = vs {
vs.dirty = true;
}
}
}
fn main() {
use rltk::RltkBuilder;
let mut context = RltkBuilder::simple80x50()
.with_title("Roguelike Tutorial")
.build();
context.with_post_scanlines(true);
let mut gs = State {
ecs: World::new(),
mapgen_next_state : Some(RunState::MainMenu{ menu_selection: gui::MainMenuSelection::NewGame }),
mapgen_index : 0,
mapgen_history: Vec::new(),
mapgen_timer: 0.0
};
gs.ecs.register::<Position>();
gs.ecs.register::<Renderable>();
gs.ecs.register::<Player>();
gs.ecs.register::<Viewshed>();
gs.ecs.register::<Monster>();
gs.ecs.register::<Name>();
gs.ecs.register::<BlocksTile>();
gs.ecs.register::<CombatStats>();
gs.ecs.register::<WantsToMelee>();
gs.ecs.register::<SufferDamage>();
gs.ecs.register::<Item>();
gs.ecs.register::<ProvidesHealing>();
gs.ecs.register::<InflictsDamage>();
gs.ecs.register::<AreaOfEffect>();
gs.ecs.register::<Consumable>();
gs.ecs.register::<Ranged>();
gs.ecs.register::<InBackpack>();
gs.ecs.register::<WantsToPickupItem>();
gs.ecs.register::<WantsToUseItem>();
gs.ecs.register::<WantsToDropItem>();
gs.ecs.register::<Confusion>();
gs.ecs.register::<SimpleMarker<SerializeMe>>();
gs.ecs.register::<SerializationHelper>();
gs.ecs.register::<Equippable>();
gs.ecs.register::<Equipped>();
gs.ecs.register::<MeleePowerBonus>();
gs.ecs.register::<DefenseBonus>();
gs.ecs.register::<WantsToRemoveItem>();
gs.ecs.register::<ParticleLifetime>();
gs.ecs.register::<HungerClock>();
gs.ecs.register::<ProvidesFood>();
gs.ecs.register::<MagicMapper>();
gs.ecs.register::<Hidden>();
gs.ecs.register::<EntryTrigger>();
gs.ecs.register::<EntityMoved>();
gs.ecs.register::<SingleActivation>();
gs.ecs.register::<BlocksVisibility>();
gs.ecs.register::<Door>();
gs.ecs.insert(SimpleMarkerAllocator::<SerializeMe>::new());
gs.ecs.insert(Map::new(1));
gs.ecs.insert(Point::new(0, 0));
gs.ecs.insert(rltk::RandomNumberGenerator::new());
let player_entity = spawner::player(&mut gs.ecs, 0, 0);
gs.ecs.insert(player_entity);
gs.ecs.insert(RunState::MapGeneration{} );
gs.ecs.insert(gamelog::GameLog{ entries : vec!["Welcome to Rusty Roguelike".to_string()] });
gs.ecs.insert(particle_system::ParticleBuilder::new());
gs.ecs.insert(rex_assets::RexAssets::new());
gs.generate_world_map(1);
rltk::main_loop(context, gs);
}
use super::{identification, producer::Control, protocol, scope::AnonymousScope, Context};
use crate::stat::Alias;
use crate::test::samples;
use clibri::server;
use std::str::FromStr;
use uuid::Uuid;
type BroadcastStructA = (Vec<Uuid>, protocol::StructA);
type BroadcastStructB = (Vec<Uuid>, protocol::StructB);
#[allow(unused_variables)]
pub async fn emit<E: server::Error, C: server::Control<E>>(
event: protocol::Events::EventA,
scope: &mut AnonymousScope<'_, E, C>,
) -> Result<(BroadcastStructA, BroadcastStructB), String> {
let uuid = match Uuid::from_str(&event.uuid) {
Ok(uuid) => uuid,
Err(err) => {
return Err(format!("Fail to parse uuid {}: {:?}", event.uuid, err));
}
};
scope.context.inc_stat(uuid, Alias::StructA);
scope.context.inc_stat(uuid, Alias::StructB);
Ok((
(vec![uuid], samples::struct_a::get()),
(vec![uuid], samples::struct_b::get()),
))
}
pub mod activity;
pub mod alert;
pub mod app;
pub mod completion;
pub mod editor;
pub mod explorer;
pub mod find;
pub mod hover;
pub mod keymap;
mod logging;
pub mod palette;
pub mod panel;
pub mod picker;
pub mod plugin;
pub mod problem;
pub mod scroll;
pub mod search;
pub mod settings;
pub mod source_control;
pub mod split;
pub mod status;
mod svg;
mod tab;
pub mod terminal;
pub mod title;
pub mod window;
use num_derive::{FromPrimitive, ToPrimitive};
use solana_sdk::decode_error::DecodeError;
use snafu::Snafu;
/// Reasons the evm execution can fail.
#[derive(Debug, Clone, PartialEq, FromPrimitive, ToPrimitive, Snafu)]
pub enum EvmError {
#[snafu(display("Cross-Program EVM execution disabled."))]
CrossExecutionNotEnabled,
#[snafu(display("InvokeContext didn't provide evm executor."))]
EvmExecutorNotFound,
#[snafu(display("Recursive cross-program EVM execution disabled."))]
RecursiveCrossExecution,
#[snafu(display("Internal executor error."))]
InternalExecutorError,
#[snafu(display("Internal transaction error."))]
InternalTransactionError,
#[snafu(display("Instruction expect additional account as argument."))]
MissingAccount,
#[snafu(display("Instruction expect some account to be a signer."))]
MissingRequiredSignature,
#[snafu(display("Authorized transaction EVM address should be calculated from sender address using evm_address_for_program."))]
AuthorizedTransactionIncorrectAddress,
    #[snafu(display("Wrong AuthorizedTx account owner."))]
AuthorizedTransactionIncorrectOwner,
#[snafu(display("Cannot free ownership of an account that EVM didn't own."))]
FreeNotEvmAccount,
    #[snafu(display("Cannot process swap, sender does not have enough tokens."))]
SwapInsufficient,
    #[snafu(display("Internal Error: Cannot borrow some of the accounts."))]
BorrowingFailed,
#[snafu(display("Failed to allocate space in storage account."))]
AllocateStorageFailed,
#[snafu(display("Failed to write data into storage account."))]
WriteStorageFailed,
#[snafu(display("Failed to deserialize data from account."))]
DeserializationError,
#[snafu(display("EVM Transaction was reverted."))]
RevertTransaction,
#[snafu(display("This instruction is not supported yet."))]
InstructionNotSupportedYet,
#[snafu(display("This instruction cause overflow in fee refund calculation."))]
OverflowInRefund,
}
impl<E> DecodeError<E> for EvmError {
fn type_of() -> &'static str {
"EvmError"
}
}
use rand::Rng;
use rust_hdl::core::prelude::*;
use rust_hdl::hls::prelude::*;
use rust_hdl::widgets::prelude::*;
#[derive(LogicBlock, Default)]
struct ControllerTest {
to_cpu: FIFOReadController<Bits<16>>,
from_cpu: FIFOWriteController<Bits<16>>,
to_cpu_fifo: SyncFIFO<Bits<16>, 6, 7, 1>,
from_cpu_fifo: SyncFIFO<Bits<16>, 6, 7, 1>,
controller: BaseController<2>,
bridge: Bridge<16, 2, 2>,
port: MOSIPort<16>,
iport: MISOPort<16>,
clock: Signal<In, Clock>,
}
impl Logic for ControllerTest {
#[hdl_gen]
fn update(&mut self) {
// Connect the clocks
self.to_cpu_fifo.clock.next = self.clock.val();
self.from_cpu_fifo.clock.next = self.clock.val();
// Connect the test interfaces
self.from_cpu.join(&mut self.from_cpu_fifo.bus_write);
self.from_cpu_fifo
.bus_read
.join(&mut self.controller.from_cpu);
self.to_cpu.join(&mut self.to_cpu_fifo.bus_read);
self.to_cpu_fifo.bus_write.join(&mut self.controller.to_cpu);
self.controller.clock.next = self.clock.val();
// Connect the controller to the bridge
self.controller.bus.join(&mut self.bridge.upstream);
// Connect the MOSI port to node 0 of the bridge
self.bridge.nodes[0].join(&mut self.port.bus);
self.bridge.nodes[1].join(&mut self.iport.bus);
self.port.ready.next = true;
}
}
#[cfg(test)]
fn make_controller_test() -> ControllerTest {
let mut uut = ControllerTest::default();
uut.clock.connect();
uut.from_cpu.data.connect();
uut.from_cpu.write.connect();
uut.to_cpu.read.connect();
uut.iport.port_in.connect();
uut.iport.ready_in.connect();
uut.connect_all();
uut
}
#[test]
fn test_controller_test_synthesizes() {
let uut = make_controller_test();
let vlog = generate_verilog(&uut);
yosys_validate("controller", &vlog).unwrap();
}
#[test]
fn test_ping_works() {
let uut = make_controller_test();
let mut sim = Simulation::new();
sim.add_clock(5, |x: &mut Box<ControllerTest>| {
x.clock.next = !x.clock.val()
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
// Send a PING command
wait_clock_true!(sim, clock, x);
for iter in 0..10 {
wait_clock_cycles!(sim, clock, x, 5);
// A ping is 0x01XX, where XX is the code returned by the controller
x.from_cpu.data.next = (0x0167_u16 + iter).into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
wait_clock_cycles!(sim, clock, x, 5);
// Insert a NOOP
x.from_cpu.data.next = 0_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
wait_clock_cycles!(sim, clock, x, 5);
}
sim.done(x)
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
wait_clock_true!(sim, clock, x);
for iter in 0..10 {
x = sim.watch(|x| !x.to_cpu.empty.val(), x)?;
sim_assert!(sim, x.to_cpu.data.val() == (0x0167_u16 + iter), x);
x.to_cpu.read.next = true;
wait_clock_cycle!(sim, clock, x);
x.to_cpu.read.next = false;
}
sim.done(x)
});
sim.run_traced(
Box::new(uut),
5000,
std::fs::File::create(vcd_path!("controller_ping.vcd")).unwrap(),
)
.unwrap();
}
#[test]
fn test_write_command_works() {
let uut = make_controller_test();
let mut sim = Simulation::new();
sim.add_clock(5, |x: &mut Box<ControllerTest>| {
x.clock.next = !x.clock.val()
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
        // Send a WRITE command
wait_clock_true!(sim, clock, x);
for iter in 0..10 {
wait_clock_cycles!(sim, clock, x, 5);
// A write command looks like 0x03XXYYYY, where XX is the address, YYYY is the count
// followed by count data elements.
// Write the command
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = 0x0300_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Then the count
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = (iter + 1).into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Then the data elements
for ndx in 0..(iter + 1) {
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = (0x7870_u16 + ndx).into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
}
            // Insert a NOOP
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = 0_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
wait_clock_cycles!(sim, clock, x, 5);
}
sim.done(x)
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
wait_clock_true!(sim, clock, x);
for iter in 0..10 {
for ndx in 0..(iter + 1) {
x = sim.watch(|x| x.port.strobe_out.val(), x)?;
sim_assert!(sim, x.port.port_out.val() == (0x7870_u32 + ndx), x);
wait_clock_cycle!(sim, clock, x);
}
}
sim.done(x)
});
sim.run_traced(
Box::new(uut),
5000,
std::fs::File::create(vcd_path!("controller_write.vcd")).unwrap(),
)
.unwrap();
}
#[test]
fn test_read_command_works() {
let uut = make_controller_test();
let mut sim = Simulation::new();
sim.add_clock(5, |x: &mut Box<ControllerTest>| {
x.clock.next = !x.clock.val()
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
        // Send a READ command
wait_clock_true!(sim, clock, x);
for iter in 0..10 {
wait_clock_cycles!(sim, clock, x, 5);
// A read command looks like 0x02XXYYYY, where XX is the address, YYYY is the count
// Write the command
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = 0x0201_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Then the count
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = (iter + 1).into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Then wait for the data elements to come back to the CPU
for ndx in 0..(iter + 1) {
x = sim.watch(|x| !x.to_cpu.empty.val(), x)?;
sim_assert!(sim, x.to_cpu.data.val() == 0xBEE0_u16 + ndx, x);
x.to_cpu.read.next = true;
wait_clock_cycle!(sim, clock, x);
x.to_cpu.read.next = false;
}
// Wait 1 clock cycle, and then issue a POLL command
wait_clock_cycle!(sim, clock, x);
x.from_cpu.data.next = 0x0401_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Read the result of the poll back
x = sim.watch(|x| !x.to_cpu.empty.val(), x)?;
// Port should always be ready
sim_assert!(sim, x.to_cpu.data.val() == 0xFF01_u16, x);
x.to_cpu.read.next = true;
wait_clock_cycle!(sim, clock, x);
x.to_cpu.read.next = false;
wait_clock_cycles!(sim, clock, x, 5);
}
sim.done(x)
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
wait_clock_true!(sim, clock, x);
for iter in 0..10 {
wait_clock_cycles!(sim, clock, x, 10);
for ndx in 0..(iter + 1) {
x.iport.port_in.next = (0xBEE0_u16 + ndx).into();
x.iport.ready_in.next = true;
x = sim.watch(|x| x.iport.strobe_out.val(), x)?;
wait_clock_cycle!(sim, clock, x);
}
}
sim.done(x)
});
sim.run_traced(
Box::new(uut),
20000,
std::fs::File::create(vcd_path!("controller_read.vcd")).unwrap(),
)
.unwrap();
}
#[test]
fn test_stream_command_works() {
let uut = make_controller_test();
let mut sim = Simulation::new();
sim.add_clock(5, |x: &mut Box<ControllerTest>| {
x.clock.next = !x.clock.val()
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
        // Send a STREAM command
wait_clock_true!(sim, clock, x);
wait_clock_cycles!(sim, clock, x, 5);
// A stream command looks like 0x05XX, where XX is the address to stream from
// Write the command
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = 0x0501_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Wait until we have collected 100 items
for iter in 0..100 {
x = sim.watch(|x| !x.to_cpu.empty.val(), x)?;
sim_assert!(sim, x.to_cpu.data.val() == 0xBAB0_u16 + iter, x);
x.to_cpu.read.next = true;
wait_clock_cycle!(sim, clock, x);
x.to_cpu.read.next = false;
}
// Send a stop command (anything non-zero)
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = 0x0501_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// There may be extra data that comes, so discard data until the
// CPU fifo is empty...
while !x.to_cpu.empty.val() {
x.to_cpu.read.next = true;
wait_clock_cycle!(sim, clock, x);
x.to_cpu.read.next = false;
}
// Send a ping
x = sim.watch(|x| !x.from_cpu.full.val(), x)?;
x.from_cpu.data.next = 0x01FF_u16.into();
x.from_cpu.write.next = true;
wait_clock_cycle!(sim, clock, x);
x.from_cpu.write.next = false;
// Wait for it to return
x = sim.watch(|x| !x.to_cpu.empty.val(), x)?;
sim_assert!(sim, x.to_cpu.data.val() == 0x01FF_u16, x);
wait_clock_cycles!(sim, clock, x, 10);
sim.done(x)
});
sim.add_testbench(move |mut sim: Sim<ControllerTest>| {
let mut x = sim.init()?;
wait_clock_true!(sim, clock, x);
for ndx in 0..100 {
x.iport.port_in.next = (0xBAB0_u16 + ndx).into();
x.iport.ready_in.next = true;
x = sim.watch(|x| x.iport.strobe_out.val(), x)?;
wait_clock_cycle!(sim, clock, x);
x.iport.ready_in.next = false;
if rand::thread_rng().gen::<f64>() < 0.3 {
for _ in 0..(rand::thread_rng().gen::<u8>() % 40) {
wait_clock_cycle!(sim, clock, x);
}
}
}
sim.done(x)
});
sim.run_traced(
Box::new(uut),
50000,
std::fs::File::create(vcd_path!("controller_stream.vcd")).unwrap(),
)
.unwrap();
}
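// The tests above drive the controller with 16-bit command words: 0x01XX pings
// with code XX, 0x02AA and 0x03AA issue reads/writes against bridge address AA,
// 0x04AA polls, and 0x05AA streams. A minimal, hypothetical encoder for that
// layout (the helper names are illustrative, not part of rust-hdl) can be
// sketched as:

```rust
// Hypothetical helpers mirroring the 16-bit command layout used in the
// tests above: the high byte selects the opcode, the low byte carries the
// payload (a ping code or a bridge node address). Not part of rust-hdl.
fn ping(code: u8) -> u16 {
    0x0100 | code as u16
}

fn read_cmd(addr: u8) -> u16 {
    0x0200 | addr as u16
}

fn write_cmd(addr: u8) -> u16 {
    0x0300 | addr as u16
}

fn main() {
    // A ping echoes its low byte back: 0x0167 carries code 0x67.
    assert_eq!(ping(0x67), 0x0167);
    // Read/write commands address bridge nodes 0x01 and 0x00.
    assert_eq!(read_cmd(0x01), 0x0201);
    assert_eq!(write_cmd(0x00), 0x0300);
}
```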
//
// Copyright 2021 The Project Oak Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
//! gRPC server for Oak Functions.
use crate::{
attestation::AttestationServer,
logger::Logger,
lookup::LookupData,
proto::streaming_session_server::StreamingSessionServer,
server::{apply_policy, BoxedExtensionFactory, WasmHandler},
};
use anyhow::Context;
use log::Level;
use oak_functions_abi::proto::{ConfigurationInfo, Request, ServerPolicy};
use prost::Message;
use std::{future::Future, net::SocketAddr, sync::Arc};
async fn handle_request(
wasm_handler: WasmHandler,
policy: ServerPolicy,
request: Request,
) -> anyhow::Result<Vec<u8>> {
let function = async move || wasm_handler.clone().handle_invoke(request).await;
let policy = policy.clone();
let response = apply_policy(policy, function)
.await
.context("internal error")?;
let mut bytes = vec![];
response
.encode(&mut bytes)
.context("couldn't encode response")?;
Ok(bytes)
}
/// Creates a [`WasmHandler`] with the given Wasm module, lookup data, metrics aggregator, and
/// extensions.
pub fn create_wasm_handler(
wasm_module_bytes: &[u8],
lookup_data: Arc<LookupData>,
extensions: Vec<BoxedExtensionFactory>,
logger: Logger,
) -> anyhow::Result<WasmHandler> {
let wasm_handler = WasmHandler::create(wasm_module_bytes, lookup_data, extensions, logger)?;
Ok(wasm_handler)
}
/// Starts a gRPC server on the given address, serving the `main` function from the given
/// [`WasmHandler`].
#[allow(clippy::too_many_arguments)]
pub async fn create_and_start_grpc_server<F: Future<Output = ()>>(
address: &SocketAddr,
wasm_handler: WasmHandler,
tee_certificate: Vec<u8>,
policy: ServerPolicy,
config_info: ConfigurationInfo,
terminate: F,
logger: Logger,
) -> anyhow::Result<()> {
logger.log_public(
Level::Info,
&format!(
"{:?}: Starting gRPC server on {:?}",
std::thread::current().id(),
&address
),
);
let request_handler =
async move |request| handle_request(wasm_handler, policy.clone(), request).await;
// A `Service` is needed for every connection. Here we create a service using the
// `wasm_handler`.
tonic::transport::Server::builder()
.add_service(StreamingSessionServer::new(
AttestationServer::create(tee_certificate, request_handler, config_info, logger)
.context("Couldn't create remote attestation server")?,
))
.serve_with_shutdown(*address, terminate)
.await
.context("Couldn't start server")?;
Ok(())
}
use rand::prelude::*;
use crate::math::Point;
/// Perlin noise generator.
pub struct Perlin {
rand_f: [f64; Self::POINT_COUNT],
perm_x: [usize; Self::POINT_COUNT],
perm_y: [usize; Self::POINT_COUNT],
perm_z: [usize; Self::POINT_COUNT],
}
impl Perlin {
const POINT_COUNT: usize = 256;
pub fn new() -> Self {
let rng = &mut rand::thread_rng();
let rand_f = {
let mut rand_f = [0.0; Self::POINT_COUNT];
for f in rand_f.iter_mut() {
*f = rng.gen();
}
rand_f
};
let perm_x = Self::gen_perm(rng);
let perm_y = Self::gen_perm(rng);
let perm_z = Self::gen_perm(rng);
Self {
rand_f,
perm_x,
perm_y,
perm_z,
}
}
pub fn noise(&self, p: &Point) -> f64 {
let u = p.x - p.x.floor();
let v = p.y - p.y.floor();
let w = p.z - p.z.floor();
let i = p.x.floor() as usize;
let j = p.y.floor() as usize;
let k = p.z.floor() as usize;
let mut c = [[[0.0; 2]; 2]; 2];
#[allow(clippy::needless_range_loop)]
for di in 0..2 {
for dj in 0..2 {
for dk in 0..2 {
c[di][dj][dk] = self.rand_f[
self.perm_x[(i + di) & 255] ^
self.perm_y[(j + dj) & 255] ^
self.perm_z[(k + dk) & 255]
];
}
}
}
Self::interpolate(c, u, v, w)
}
fn gen_perm(rng: &mut ThreadRng) -> [usize; Self::POINT_COUNT] {
let mut p = [0; Self::POINT_COUNT];
for (i, n) in p.iter_mut().enumerate() {
*n = i;
}
for i in (1..p.len()).rev() {
let target = rng.gen_range(0..i);
p.swap(i, target);
}
p
}
fn interpolate(c: [[[f64; 2]; 2]; 2], u: f64, v: f64, w: f64) -> f64 {
let mut acc = 0.0;
#[allow(clippy::needless_range_loop)]
for i in 0..2 {
for j in 0..2 {
for k in 0..2 {
acc += (i as f64).mul_add(u, (1.0 - i as f64) * (1.0 - u)) *
(j as f64).mul_add(v, (1.0 - j as f64) * (1.0 - v)) *
(k as f64).mul_add(w, (1.0 - k as f64) * (1.0 - w)) * c[i][j][k];
}
}
}
acc
}
}
impl Default for Perlin {
fn default() -> Self {
Self::new()
}
}
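// The smoothing done by `interpolate` above is plain trilinear interpolation
// over the eight lattice corners. A standalone sketch of the same weighting,
// independent of the `Perlin` struct, looks like this:

```rust
// Trilinear interpolation over a unit cube: c[i][j][k] holds the corner
// values and (u, v, w) is the sample point in [0, 1]^3. Each corner is
// weighted by how close the point lies to it along each axis.
fn trilerp(c: [[[f64; 2]; 2]; 2], u: f64, v: f64, w: f64) -> f64 {
    let mut acc = 0.0;
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                let wu = if i == 1 { u } else { 1.0 - u };
                let wv = if j == 1 { v } else { 1.0 - v };
                let ww = if k == 1 { w } else { 1.0 - w };
                acc += wu * wv * ww * c[i][j][k];
            }
        }
    }
    acc
}

fn main() {
    let mut c = [[[0.0; 2]; 2]; 2];
    c[1][1][1] = 8.0;
    // Exactly at the (1,1,1) corner, the result is that corner's value.
    assert_eq!(trilerp(c, 1.0, 1.0, 1.0), 8.0);
    // At the cube centre, every corner contributes with weight 1/8.
    assert_eq!(trilerp(c, 0.5, 0.5, 0.5), 1.0);
}
```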
#[doc = "Register `FsinEXTCFG` reader"]
pub struct R(crate::R<FSINEXTCFG_SPEC>);
impl core::ops::Deref for R {
type Target = crate::R<FSINEXTCFG_SPEC>;
#[inline(always)]
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl From<crate::R<FSINEXTCFG_SPEC>> for R {
#[inline(always)]
fn from(reader: crate::R<FSINEXTCFG_SPEC>) -> Self {
R(reader)
}
}
#[doc = "Register `FsinEXTCFG` writer"]
pub struct W(crate::W<FSINEXTCFG_SPEC>);
impl core::ops::Deref for W {
type Target = crate::W<FSINEXTCFG_SPEC>;
#[inline(always)]
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl core::ops::DerefMut for W {
#[inline(always)]
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl From<crate::W<FSINEXTCFG_SPEC>> for W {
#[inline(always)]
fn from(writer: crate::W<FSINEXTCFG_SPEC>) -> Self {
W(writer)
}
}
impl W {
#[doc = "Writes raw bits to the register."]
#[inline(always)]
pub unsafe fn bits(&mut self, bits: u32) -> &mut Self {
self.0.bits(bits);
self
}
}
#[doc = "ASRC Input Sample Pulse Extend Configuration Register\n\nThis register you can [`read`](crate::generic::Reg::read), [`write_with_zero`](crate::generic::Reg::write_with_zero), [`reset`](crate::generic::Reg::reset), [`write`](crate::generic::Reg::write), [`modify`](crate::generic::Reg::modify). See [API](https://docs.rs/svd2rust/#read--modify--write-api).\n\nFor information about available fields see [fsin_extcfg](index.html) module"]
pub struct FSINEXTCFG_SPEC;
impl crate::RegisterSpec for FSINEXTCFG_SPEC {
type Ux = u32;
}
#[doc = "`read()` method returns [fsin_extcfg::R](R) reader structure"]
impl crate::Readable for FSINEXTCFG_SPEC {
type Reader = R;
}
#[doc = "`write(|w| ..)` method takes [fsin_extcfg::W](W) writer structure"]
impl crate::Writable for FSINEXTCFG_SPEC {
type Writer = W;
}
#[doc = "`reset()` method sets FsinEXTCFG to value 0"]
impl crate::Resettable for FSINEXTCFG_SPEC {
#[inline(always)]
fn reset_value() -> Self::Ux {
0
}
}
#[doc = r" Value read from the register"]
pub struct R {
bits: u32,
}
#[doc = r" Value to write to the register"]
pub struct W {
bits: u32,
}
impl super::IFLAG2 {
#[doc = r" Modifies the contents of the register"]
#[inline]
pub fn modify<F>(&self, f: F)
where
for<'w> F: FnOnce(&R, &'w mut W) -> &'w mut W,
{
let bits = self.register.get();
let r = R { bits: bits };
let mut w = W { bits: bits };
f(&r, &mut w);
self.register.set(w.bits);
}
#[doc = r" Reads the contents of the register"]
#[inline]
pub fn read(&self) -> R {
R {
bits: self.register.get(),
}
}
#[doc = r" Writes to the register"]
#[inline]
pub fn write<F>(&self, f: F)
where
F: FnOnce(&mut W) -> &mut W,
{
let mut w = W::reset_value();
f(&mut w);
self.register.set(w.bits);
}
#[doc = r" Writes the reset value to the register"]
#[inline]
pub fn reset(&self) {
self.write(|w| w)
}
}
#[doc = "Possible values of the field `BUFHI`"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum BUFHIR {
#[doc = "The corresponding buffer has no occurrence of successfully completed transmission or reception."]
_0,
#[doc = "The corresponding buffer has successfully completed transmission or reception."]
_1,
#[doc = r" Reserved"]
_Reserved(u32),
}
impl BUFHIR {
#[doc = r" Value of the field as raw bits"]
#[inline]
pub fn bits(&self) -> u32 {
match *self {
BUFHIR::_0 => 0,
BUFHIR::_1 => 1,
BUFHIR::_Reserved(bits) => bits,
}
}
#[allow(missing_docs)]
#[doc(hidden)]
#[inline]
pub fn _from(value: u32) -> BUFHIR {
match value {
0 => BUFHIR::_0,
1 => BUFHIR::_1,
i => BUFHIR::_Reserved(i),
}
}
#[doc = "Checks if the value of the field is `_0`"]
#[inline]
pub fn is_0(&self) -> bool {
*self == BUFHIR::_0
}
#[doc = "Checks if the value of the field is `_1`"]
#[inline]
pub fn is_1(&self) -> bool {
*self == BUFHIR::_1
}
}
#[doc = "Values that can be written to the field `BUFHI`"]
pub enum BUFHIW {
#[doc = "The corresponding buffer has no occurrence of successfully completed transmission or reception."]
_0,
#[doc = "The corresponding buffer has successfully completed transmission or reception."]
_1,
}
impl BUFHIW {
#[allow(missing_docs)]
#[doc(hidden)]
#[inline]
pub fn _bits(&self) -> u32 {
match *self {
BUFHIW::_0 => 0,
BUFHIW::_1 => 1,
}
}
}
#[doc = r" Proxy"]
pub struct _BUFHIW<'a> {
w: &'a mut W,
}
impl<'a> _BUFHIW<'a> {
#[doc = r" Writes `variant` to the field"]
#[inline]
pub fn variant(self, variant: BUFHIW) -> &'a mut W {
unsafe { self.bits(variant._bits()) }
}
#[doc = "The corresponding buffer has no occurrence of successfully completed transmission or reception."]
#[inline]
pub fn _0(self) -> &'a mut W {
self.variant(BUFHIW::_0)
}
#[doc = "The corresponding buffer has successfully completed transmission or reception."]
#[inline]
pub fn _1(self) -> &'a mut W {
self.variant(BUFHIW::_1)
}
#[doc = r" Writes raw bits to the field"]
#[inline]
pub unsafe fn bits(self, value: u32) -> &'a mut W {
const MASK: u32 = 4294967295;
const OFFSET: u8 = 0;
self.w.bits &= !((MASK as u32) << OFFSET);
self.w.bits |= ((value & MASK) as u32) << OFFSET;
self.w
}
}
impl R {
#[doc = r" Value of the register as raw bits"]
#[inline]
pub fn bits(&self) -> u32 {
self.bits
}
#[doc = "Bits 0:31 - Buffer MBi Interrupt"]
#[inline]
pub fn bufhi(&self) -> BUFHIR {
BUFHIR::_from({
const MASK: u32 = 4294967295;
const OFFSET: u8 = 0;
((self.bits >> OFFSET) & MASK as u32) as u32
})
}
}
impl W {
#[doc = r" Reset value of the register"]
#[inline]
pub fn reset_value() -> W {
W { bits: 0 }
}
#[doc = r" Writes raw bits to the register"]
#[inline]
pub unsafe fn bits(&mut self, bits: u32) -> &mut Self {
self.bits = bits;
self
}
#[doc = "Bits 0:31 - Buffer MBi Interrupt"]
#[inline]
pub fn bufhi(&mut self) -> _BUFHIW {
_BUFHIW { w: self }
}
}
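// The reader/writer structs above follow the usual svd2rust read–modify–write
// pattern. Stripped of the generated detail, the core idea can be sketched as
// follows — a hedged illustration that substitutes a plain `Cell<u32>` for the
// memory-mapped hardware register:

```rust
use std::cell::Cell;

// Minimal stand-in for a memory-mapped register: `modify` reads the current
// bits, lets a closure compute new bits, and writes them back, mirroring the
// generated `IFLAG2::modify` above.
struct Reg {
    bits: Cell<u32>,
}

impl Reg {
    fn read(&self) -> u32 {
        self.bits.get()
    }
    fn write(&self, bits: u32) {
        self.bits.set(bits);
    }
    fn modify<F: FnOnce(u32) -> u32>(&self, f: F) {
        self.write(f(self.read()));
    }
}

fn main() {
    let reg = Reg { bits: Cell::new(0b0011) };
    // Set bit 3 while preserving the other bits, as a field writer would.
    reg.modify(|bits| bits | (1 << 3));
    assert_eq!(reg.read(), 0b1011);
}
```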
//! A typed high-level pipeline interface.
use std::borrow::Borrow;
use std::mem;
use std::marker::PhantomData;
use {hal, format, handle};
use hal::image::Layout;
use hal::pass::{AttachmentOps, AttachmentLoadOp, AttachmentStoreOp};
use {Backend, Device, Primitive, Supports, Transfer, Graphics, Encoder};
pub use hal::pso::{
ColorValue, DepthValue, StencilValue, Rect, Viewport,
DescriptorBinding, DescriptorArrayIndex, Rasterizer, CreationError, InstanceRate,
};
#[derive(Debug)]
pub struct RawDescriptorSet<B: Backend> {
pub(crate) resource: B::DescriptorSet,
pub(crate) pool: handle::raw::DescriptorPool<B>,
}
impl<B: Backend> RawDescriptorSet<B> {
pub fn resource(&self) -> &B::DescriptorSet { &self.resource }
}
pub trait Descriptors<B: Backend>: Sized {
type Data: Sized;
fn from_raw(handle::raw::DescriptorSetLayout<B>, RawDescriptorSet<B>) -> (Self, Self::Data);
fn layout_bindings() -> Vec<hal::pso::DescriptorSetLayoutBinding>;
fn layout(&self) -> &B::DescriptorSetLayout;
fn set(&self) -> &B::DescriptorSet;
}
pub trait BindDesc {
const TYPE: hal::pso::DescriptorType;
const COUNT: usize;
}
pub trait Bind<B: Backend>: BindDesc {
type Handle: 'static + Clone;
fn write<'a, I>(views: I) -> Vec<hal::pso::Descriptor<'a, B>>
where
I: IntoIterator,
I::Item: Borrow<&'a Self::Handle>;
fn require<'a>(
&'a Self::Handle,
&mut Vec<(&'a handle::raw::Buffer<B>, hal::buffer::State)>,
&mut Vec<(&'a handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
&mut handle::Bag<B>,
);
}
macro_rules! define_descriptors {
([$( $array_len:expr ),*] $( $name:ident, )*) => {
$(
impl<T: BindDesc> BindDesc for [T; $array_len] {
const TYPE: hal::pso::DescriptorType = T::TYPE;
const COUNT: usize = $array_len * T::COUNT;
}
impl<B, T> Bind<B> for [T; $array_len]
where B: Backend, T: Bind<B>
{
type Handle = T::Handle;
fn write<'a, I>(handles: I) -> Vec<hal::pso::Descriptor<'a, B>>
where
I: IntoIterator,
I::Item: Borrow<&'a Self::Handle>
{
T::write(handles)
}
fn require<'a>(
handle: &'a Self::Handle,
buffers: &mut Vec<(&'a handle::raw::Buffer<B>, hal::buffer::State)>,
images: &mut Vec<(&'a handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
others: &mut handle::Bag<B>
) {
T::require(handle, buffers, images, others)
}
}
)*
$(
pub struct $name;
impl BindDesc for $name {
const TYPE: hal::pso::DescriptorType = hal::pso::DescriptorType::$name;
const COUNT: usize = 1;
}
)*
}
}
// TODO: type-safe formats
define_descriptors! {
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ]
SampledImage,
Sampler,
}
impl<B: Backend> Bind<B> for SampledImage {
type Handle = handle::raw::ImageView<B>;
fn write<'a, I>(_views: I) -> Vec<hal::pso::Descriptor<'a, B>>
where
I: IntoIterator,
I::Item: Borrow<&'a Self::Handle>,
{
Vec::new()
/* views
.into_iter()
.map(|view| {
let layout = Layout::ShaderReadOnlyOptimal;
(view.borrow().resource(), layout)
}).collect())*/
}
fn require<'a>(
view: &'a Self::Handle,
_: &mut Vec<(&'a handle::raw::Buffer<B>, hal::buffer::State)>,
images: &mut Vec<(&'a handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
_: &mut handle::Bag<B>,
) {
let img = view.info();
let levels = img.info().mip_levels;
let layers = img.info().kind.num_layers();
let state = (hal::image::Access::SHADER_READ, Layout::ShaderReadOnlyOptimal);
for level in 0..levels {
for layer in 0..layers {
let subresource = hal::image::Subresource {
aspects: img.info().aspects,
level,
layer
};
images.push((img, subresource, state));
}
}
}
}
impl<B: Backend> Bind<B> for Sampler {
type Handle = handle::raw::Sampler<B>;
fn write<'a, I>(_samplers: I) -> Vec<hal::pso::Descriptor<'a, B>>
where
I: IntoIterator,
I::Item: Borrow<&'a Self::Handle>,
{
Vec::new()
/*
samplers
.into_iter()
.map(|sampler| sampler.borrow().resource())
.collect())*/
}
fn require<'a>(
sampler: &'a Self::Handle,
_: &mut Vec<(&'a handle::raw::Buffer<B>, hal::buffer::State)>,
_: &mut Vec<(&'a handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
others: &mut handle::Bag<B>,
) {
others.add(sampler.clone());
}
}
pub struct DescriptorSetBindRef<'a, 'b, B: Backend, T: Bind<B>> {
pub set: &'a B::DescriptorSet,
pub binding: DescriptorBinding,
pub handles: &'b mut [Option<T::Handle>],
}
pub struct DescriptorSetsUpdate<'a, B: Backend> {
device: &'a mut Device<B>,
writes: Vec<hal::pso::DescriptorSetWrite<'a, B, Vec<hal::pso::Descriptor<'a, B>>>>,
}
impl<'a, B: Backend> DescriptorSetsUpdate<'a, B> {
pub(crate) fn new(device: &'a mut Device<B>) -> Self {
DescriptorSetsUpdate { device, writes: Vec::new() }
}
pub fn write<'b, T: Bind<B>, I>(
mut self,
bind_ref: DescriptorSetBindRef<'a, 'b, B, T>,
array_offset: usize,
handles: I,
) -> Self
where
I: IntoIterator,
I::Item: Borrow<&'a T::Handle>,
{
let handles: Vec<_> = handles.into_iter().map(|handle| *handle.borrow()).collect();
for (slot, &handle) in bind_ref.handles[array_offset..].iter_mut().zip(handles.iter()) {
*slot = Some(handle.clone());
}
self.writes.push(hal::pso::DescriptorSetWrite {
set: bind_ref.set,
binding: bind_ref.binding,
array_offset: 0,
descriptors: T::write(handles),
});
self
}
pub fn finish(self) {
use hal::Device;
self.device.raw.write_descriptor_sets(self.writes);
}
}
pub trait GraphicsPipelineInit<B: Backend> {
type Pipeline;
fn create<'a>(
self,
&mut Device<B>,
hal::pso::GraphicsShaderSet<'a, B>,
Primitive,
Rasterizer
) -> Result<Self::Pipeline, CreationError>;
}
pub trait GraphicsPipelineMeta<B: Backend> {
fn layout(&self) -> &B::PipelineLayout;
fn render_pass(&self) -> &B::RenderPass;
}
pub trait GraphicsPipelineData<B: Backend> {
type Pipeline;
fn begin_renderpass<'a, 'b, C>(
self,
encoder: &'a mut Encoder<'b, B, C>,
pipeline: &'a Self::Pipeline
) -> hal::command::RenderPassInlineEncoder<'a, B, hal::command::Primary>
where Self: 'a, 'b: 'a, C: Supports<Transfer> + Supports<Graphics>;
}
pub trait Component<'a, B: Backend> {
type Init: 'a;
type Data: 'a;
fn descriptor_layout<'b>(&'b Self::Init) -> Option<&'b B::DescriptorSetLayout>
where 'a: 'b
{
None
}
fn attachment(&Self::Init) -> Option<Attachment> {
None
}
fn append_desc(
Self::Init,
&mut hal::pso::GraphicsPipelineDesc<B>,
) {}
fn require<'b>(
&'b Self::Data,
&mut Vec<(&'b handle::raw::Buffer<B>, hal::buffer::State)>,
&mut Vec<(&'b handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
&mut handle::Bag<B>,
) where 'a: 'b {}
fn vertex_buffer<'b>(&'b Self::Data) -> Option<(&'b B::Buffer, hal::buffer::Offset)>
where 'a: 'b
{
None
}
fn descriptor_set<'b>(&'b Self::Data) -> Option<&'b B::DescriptorSet>
where 'a: 'b
{
None
}
}
pub struct Attachment {
pub format: format::Format,
pub ops: AttachmentOps,
pub stencil_ops: AttachmentOps,
pub required_layout: Layout,
}
pub struct RenderTarget<F: format::AsFormat>(PhantomData<F>);
impl<'a, B, F> Component<'a, B> for RenderTarget<F>
where
B: Backend,
F: 'a + format::AsFormat,
{
type Init = hal::pso::ColorBlendDesc;
type Data = &'a handle::ImageView<B, F>;
fn attachment(_: &Self::Init) -> Option<Attachment> {
Some(Attachment {
format: F::SELF,
// TODO: AttachmentLoadOp::Clear
ops: AttachmentOps::new(AttachmentLoadOp::Load, AttachmentStoreOp::Store),
stencil_ops: AttachmentOps::DONT_CARE,
required_layout: Layout::ColorAttachmentOptimal,
})
}
fn append_desc(
init: Self::Init,
pipeline_desc: &mut hal::pso::GraphicsPipelineDesc<B>,
) {
pipeline_desc.blender.targets.push(init);
}
fn require<'b>(
data: &'b Self::Data,
_: &mut Vec<(&'b handle::raw::Buffer<B>, hal::buffer::State)>,
images: &mut Vec<(&'b handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
_: &mut handle::Bag<B>,
) where 'a: 'b {
let img = data.as_ref().info();
let levels = img.info().mip_levels;
let layers = img.info().kind.num_layers();
// TODO: READ not always necessary
let state = (hal::image::Access::COLOR_ATTACHMENT_READ | hal::image::Access::COLOR_ATTACHMENT_WRITE,
Layout::ColorAttachmentOptimal);
for level in 0..levels {
for layer in 0..layers {
let subresource = hal::image::Subresource {
aspects: img.info().aspects,
level,
layer
};
images.push((img, subresource, state));
}
}
}
}
pub trait Structure: Sized {
fn elements() -> Vec<hal::pso::Element<format::Format>>;
}
/// Helper trait to support variable instance rate.
pub trait ToInstanceRate {
/// The associated init type for PSO component.
type Init;
/// Get an actual instance rate value from the init.
fn get_rate(init: &Self::Init) -> InstanceRate;
}
/// Helper phantom type for per-vertex attributes.
pub enum NonInstanced {}
/// Helper phantom type for per-instance attributes.
pub enum Instanced {}
impl ToInstanceRate for InstanceRate {
type Init = InstanceRate;
fn get_rate(init: &Self::Init) -> InstanceRate { *init }
}
impl ToInstanceRate for Instanced {
type Init = ();
fn get_rate(_: &Self::Init) -> InstanceRate { 1 }
}
impl ToInstanceRate for NonInstanced {
type Init = ();
fn get_rate(_: &Self::Init) -> InstanceRate { 0 }
}
pub struct VertexBuffer<T: Structure, I=NonInstanced>(PhantomData<(T, I)>);
impl<'a, B, T, I> Component<'a, B> for VertexBuffer<T, I>
where B: Backend, T: 'a + Structure, I: ToInstanceRate, I::Init: 'a
{
type Init = I::Init;
type Data = &'a handle::Buffer<B, T>;
fn append_desc(
init: Self::Init,
pipeline_desc: &mut hal::pso::GraphicsPipelineDesc<B>,
) {
let binding = pipeline_desc.vertex_buffers.len() as u32;
pipeline_desc.vertex_buffers.push(hal::pso::VertexBufferDesc {
stride: mem::size_of::<T>() as u32,
rate: I::get_rate(&init),
});
let mut location = 0;
for element in T::elements() {
pipeline_desc.attributes.push(hal::pso::AttributeDesc {
location,
binding,
element,
});
location += 1;
}
}
fn require<'b>(
data: &'b Self::Data,
buffers: &mut Vec<(&'b handle::raw::Buffer<B>, hal::buffer::State)>,
_: &mut Vec<(&'b handle::raw::Image<B>, hal::image::Subresource, hal::image::State)>,
_: &mut handle::Bag<B>,
) where 'a: 'b {
buffers.push((data.as_ref(), hal::buffer::Access::VERTEX_BUFFER_READ));
}
fn vertex_buffer<'b>(data: &'b Self::Data) -> Option<(&'b B::Buffer, hal::buffer::Offset)>
where 'a: 'b
{
// TODO: offset
Some((data.as_ref().resource(), 0))
}
}
pub type InstanceBuffer<T> = VertexBuffer<T, Instanced>;
use libc;
pub type __uint32_t = libc::c_uint;
pub type uint32_t = __uint32_t;
pub type size_t = libc::c_ulong;
// https://www.geeksforgeeks.org/move-zeroes-end-array/
pub unsafe fn push_zeroes_to_end(mut arr: *mut uint32_t,
mut n: size_t) -> size_t {
let mut count: size_t = 0i32 as size_t;
let mut i: size_t = 0i32 as size_t;
while i < n {
if *arr.offset(i as isize) != 0i32 as libc::c_uint {
let fresh0 = count;
count = count.wrapping_add(1);
*arr.offset(fresh0 as isize) = *arr.offset(i as isize)
}
i = i.wrapping_add(1)
}
let mut ret: size_t = count;
while count < n {
let fresh1 = count;
count = count.wrapping_add(1);
*arr.offset(fresh1 as isize) = 0i32 as uint32_t
}
return ret;
}
pub unsafe fn set_add(mut values: *mut uint32_t,
mut len: *mut size_t, mut cap: size_t,
mut target: uint32_t) -> bool {
if *len == cap { return 0i32 != 0 }
let mut i: uint32_t = 0i32 as uint32_t;
while (i as libc::c_ulong) < *len {
if *values.offset(i as isize) == target { return 0i32 != 0 }
i = i.wrapping_add(1)
}
let fresh2 = *len;
*len = (*len).wrapping_add(1);
*values.offset(fresh2 as isize) = target;
    // `target` was inserted; signal success.
    return 1i32 != 0;
}
/* *
* Add `target` to `values` if it doesn't exist
* "set"s should only be modified with set_* functions
* Values MUST be greater than 0
*/
/* *
* Remove `target` from `values` if it exists
* "set"s should only be modified with set_* functions
* Values MUST be greater than 0
*/
pub unsafe fn set_remove(mut values: *mut uint32_t,
mut len: *mut size_t, mut cap: size_t,
mut target: uint32_t) -> bool {
let mut i: uint32_t = 0i32 as uint32_t;
while (i as libc::c_ulong) < *len {
if *values.offset(i as isize) == target {
// Set to 0 and swap with the end element so that
// zeroes exist only after all the values.
*len = (*len).wrapping_sub(1);
let mut last_elem_pos: size_t = *len;
*values.offset(i as isize) =
*values.offset(last_elem_pos as isize);
*values.offset(last_elem_pos as isize) = 0i32 as uint32_t;
return 1i32 != 0
}
i = i.wrapping_add(1)
}
return 0i32 != 0;
}
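The removal above uses the swap-with-last trick: the freed slot receives the last live value, the tail slot is zeroed, and the length shrinks, so all zeroes stay past `len`. A safe sketch on a fixed-capacity array with an explicit length:

```rust
// Remove `target` by overwriting its slot with the last live value and
// zeroing the freed tail slot — O(1) removal, order not preserved.
fn set_remove(values: &mut [u32], len: &mut usize, target: u32) -> bool {
    for i in 0..*len {
        if values[i] == target {
            *len -= 1;
            values[i] = values[*len]; // swap in the last live value
            values[*len] = 0;         // keep zeroes past `len`
            return true;
        }
    }
    false
}

fn main() {
    let mut values = [5, 9, 2, 0];
    let mut len = 3;
    assert!(set_remove(&mut values, &mut len, 9));
    // 2 (the last live value) moved into 9's slot; the tail is zeroed.
    assert_eq!((values, len), ([5, 2, 0, 0], 2));
    assert!(!set_remove(&mut values, &mut len, 9));
}
```

This is the same idea as `Vec::swap_remove` in std, specialized to a zero-terminated fixed buffer.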
| 35.086957 | 70 | 0.570838 |
395f56e110f052ff76b6383796269220e5f911b2 | 2,635 | use crate::flows;
use crate::flows::core::Variant;
use crate::structures::config::Command::{Alfred, Best, Fn, Preview, Query, Repo, Search, Widget};
use crate::structures::config::{AlfredCommand, Config, RepoCommand};
use anyhow::Context;
use anyhow::Error;
pub fn handle_config(config: Config) -> Result<(), Error> {
match config.cmd.as_ref() {
None => flows::core::main(Variant::Core, config, true),
Some(c) => match c {
Preview { line } => flows::preview::main(&line[..]),
Query { query } => {
let query_clone = query.clone();
flows::query::main(query.clone(), config)
.with_context(|| format!("Failed to filter cheatsheets for {}", query_clone))
}
Best { query, args } => {
let query_clone = query.clone();
flows::best::main(query.clone(), args.to_vec(), config).with_context(|| {
format!("Failed to execute snippet similar to {}", query_clone)
})
}
Search { query } => flows::search::main(query.clone(), config)
.context("Failed to search for online cheatsheets"),
Widget { shell } => {
flows::shell::main(&shell).context("Failed to print shell widget code")
}
Fn { func, args } => flows::func::main(func.clone(), args.to_vec())
.with_context(|| format!("Failed to execute function `{}`", func)),
Repo { cmd } => match cmd {
RepoCommand::Add { uri } => flows::repo::add(uri.clone(), &config.finder)
.with_context(|| format!("Failed to import cheatsheets from `{}`", uri)),
RepoCommand::Browse => flows::repo::browse(&config.finder)
.context("Failed to browse featured cheatsheets"),
},
Alfred { cmd } => {
match cmd {
AlfredCommand::Start => flows::alfred::main(config)
.context("Failed to call Alfred starting function"),
AlfredCommand::Suggestions => flows::alfred::suggestions(config, false)
.context("Failed to call Alfred suggestion function"),
AlfredCommand::Check => flows::alfred::suggestions(config, true)
.context("Failed to call Alfred check function"),
AlfredCommand::Transform => flows::alfred::transform()
.context("Failed to call Alfred transform function"),
}
}
},
}
}
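The `.with_context(|| format!(…))` calls above attach an error message lazily: the closure building the message only runs on the failure path, so the happy path pays nothing for the `format!`. A std-only sketch of that pattern (no `anyhow`), using a plain `String` error for illustration:

```rust
// Lazily attach context to an error: the message-building closure is
// only invoked when the underlying operation actually fails.
fn with_context<T, E: std::fmt::Display>(
    res: Result<T, E>,
    ctx: impl FnOnce() -> String,
) -> Result<T, String> {
    res.map_err(|e| format!("{}: {}", ctx(), e))
}

fn main() {
    let ok: Result<i32, String> =
        with_context("42".parse(), || "Failed to parse".into());
    assert_eq!(ok, Ok(42));

    let err: Result<i32, String> =
        with_context("x".parse(), || "Failed to parse".into());
    assert!(err.unwrap_err().starts_with("Failed to parse: "));
}
```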
| 43.916667 | 97 | 0.528273 |
ed4e36cc3dd031a3da7a2f910c976cde6b8042e5 | 2,821 | use std::os::raw::{c_float, c_int};
use skia_safe::{AlphaType, ColorType, ImageInfo, IPoint, ISize, IVector, Rect};
use crate::common::context::Context;
use crate::common::context::pixel_manipulation::image_data::ImageData;
pub mod image_data;
impl Context {
pub fn create_image_data(width: c_int, height: c_int) -> ImageData {
ImageData::new(width, height)
}
pub fn get_image_data(
&mut self,
sx: c_float,
sy: c_float,
sw: c_float,
sh: c_float,
) -> ImageData {
let info = ImageInfo::new(
ISize::new(sw as i32, sh as i32),
ColorType::RGBA8888,
AlphaType::Unpremul,
None,
);
let row_bytes = info.width() * 4;
let mut slice = vec![255u8; (row_bytes * info.height()) as usize];
let _ = self.surface.canvas().read_pixels(
&info,
slice.as_mut_slice(),
row_bytes as usize,
IPoint::new(sx as i32, sy as i32),
);
let mut image_data = ImageData::new(info.width(), info.height());
image_data.set_data(slice);
image_data
}
pub fn put_image_data(
&mut self,
data: &ImageData,
dx: c_float,
dy: c_float,
sx: c_float,
sy: c_float,
sw: c_float,
sh: c_float,
) {
let mut dx = dx;
let mut dy = dy;
let mut sx = sx;
let mut sy = sy;
let mut sw = sw;
let mut sh = sh;
let srect: Rect = Rect::from_xywh(sx, sy, sw, sh);
let info: ImageInfo;
let row_bytes: usize;
if srect.is_empty() {
info = ImageInfo::new(
ISize::new(data.width(), data.height()),
ColorType::RGBA8888,
AlphaType::Unpremul,
None,
);
row_bytes = (data.width() * 4) as usize;
} else {
if sw < 0.0 {
sx += sw;
sw = -sw;
}
            if sh < 0.0 {
sy += sh;
sh = -sh;
}
if sx + sw > data.width() as f32 {
sw = data.width() as f32 - sx;
}
if sy + sh > data.height() as f32 {
sh = data.height() as f32 - sy;
}
dx += sx;
dy += sy;
info = ImageInfo::new(
ISize::new(sw as i32, sh as i32),
ColorType::RGBA8888,
AlphaType::Unpremul,
None,
);
row_bytes = (sw * 4.0) as usize;
}
let _ = self.surface.canvas().write_pixels(
&info,
&data.data(),
row_bytes,
IVector::new(dx as i32, dy as i32),
);
}
}
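Before writing pixels, `put_image_data` normalizes the source rectangle: negative extents are flipped around their origin, and the rect is clamped to the image bounds. A standalone sketch of that normalization (the clamp-to-zero of negative origins is left out here, as in the method above):

```rust
// Normalize a possibly negative or oversized source rectangle:
// flip negative extents, then clamp width/height to the image edge.
fn normalize_rect(
    mut sx: f32, mut sy: f32, mut sw: f32, mut sh: f32,
    width: f32, height: f32,
) -> (f32, f32, f32, f32) {
    if sw < 0.0 { sx += sw; sw = -sw; }
    if sh < 0.0 { sy += sh; sh = -sh; }
    if sx + sw > width  { sw = width - sx; }
    if sy + sh > height { sh = height - sy; }
    (sx, sy, sw, sh)
}

fn main() {
    // A 10x10 rect dragged up-and-left from (4, 4) flips to start at (-6, -6)…
    assert_eq!(normalize_rect(4.0, 4.0, -10.0, -10.0, 8.0, 8.0),
               (-6.0, -6.0, 10.0, 10.0));
    // …and an oversized rect is clamped to the 8x8 image edge.
    assert_eq!(normalize_rect(2.0, 2.0, 10.0, 10.0, 8.0, 8.0),
               (2.0, 2.0, 6.0, 6.0));
}
```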
| 26.613208 | 79 | 0.458703 |
26ce8bd7b74c4efa7dc34da105763405419eb463 | 1,678 | // Copyright (c) SimpleStaking, Viable Systems and Tezedge Contributors
// SPDX-License-Identifier: MIT
use crate::protocol_runner::init::context::{
ProtocolRunnerInitContextErrorAction, ProtocolRunnerInitContextState,
};
use crate::protocol_runner::init::ProtocolRunnerInitState;
use crate::protocol_runner::ProtocolRunnerState;
use crate::service::ProtocolRunnerService;
use crate::{Action, ActionWithMeta, Service, Store};
use super::ProtocolRunnerInitContextPendingAction;
pub fn protocol_runner_init_context_effects<S>(store: &mut Store<S>, action: &ActionWithMeta)
where
S: Service,
{
if let Action::ProtocolRunnerInitContext(_) = &action.action {
let state = store.state.get();
let apply_genesis = match &state.protocol_runner {
ProtocolRunnerState::Init(ProtocolRunnerInitState::Context(
ProtocolRunnerInitContextState::Init { apply_genesis },
)) => *apply_genesis,
_ => return,
};
let res = store.service.protocol_runner().init_context(
state.config.protocol_runner.storage.clone(),
&state.config.protocol_runner.environment,
apply_genesis,
state.config.protocol_runner.enable_testchain,
false,
state.config.init_storage_data.patch_context.clone(),
state.config.init_storage_data.context_stats_db_path.clone(),
);
match res {
Ok(token) => store.dispatch(ProtocolRunnerInitContextPendingAction { token }),
Err(error) => {
store.dispatch(ProtocolRunnerInitContextErrorAction { token: None, error })
}
};
}
}
| 39.023256 | 93 | 0.678188 |
71ad86def4f874b069ec8a48dba8f4456491dd50 | 1,710 | #![forbid(unsafe_code)]
// #![forbid(rust_2018_idioms)]
#![allow(dead_code)]
use std::ops::Range;
/// A span is a range into a set of bytes - see it as a selection into a Git config file.
///
/// Similar to [`std::ops::RangeInclusive`], but tailor made to work for us.
/// There are various issues with std ranges, which we don't have to opt into for the simple Range-like item we need.
#[derive(PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy)]
struct Span {
pub start: usize,
pub end_inclusive: usize,
}
impl From<Span> for Range<usize> {
fn from(Span { start, end_inclusive }: Span) -> Self {
Range {
start,
end: end_inclusive + 1,
}
}
}
impl Span {
fn to_range(&self) -> Range<usize> {
        (*self).into()
}
}
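Since `end_inclusive` names the last byte of the selection, the `From` impl above widens it by one to produce a half-open `std::ops::Range`. A quick check of that off-by-one, with a minimal copy of the crate-private `Span`:

```rust
use std::ops::Range;

// Minimal copy of the crate-private `Span` and its conversion, just to
// exercise the inclusive-to-half-open widening.
#[derive(Clone, Copy)]
struct Span { start: usize, end_inclusive: usize }

impl From<Span> for Range<usize> {
    fn from(Span { start, end_inclusive }: Span) -> Self {
        start..end_inclusive + 1
    }
}

fn main() {
    let buf = b"key = value";
    let key = Span { start: 0, end_inclusive: 2 }; // selects "key"
    let range: Range<usize> = key.into();
    assert_eq!(range, 0..3);
    assert_eq!(&buf[range], b"key");
}
```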
pub mod file;
pub use file::File;
/// A module with specialized value types as they exist within git config files.
pub mod value;
/// Spanned items refer to their content using [`Span`]s, thus they act like a pointer into a byte buffer representing the config file.
///
/// These are inherently read-only, as they do not own any data but rather point to a buffer they don't even know.
mod spanned;
/// Owned versions of what can be found in `spanned`, which allows these items to be altered.
///
/// All of these *may* remember their originating `span` as `Some(…)`, which is the entire region in the config file they point to. This is important
/// in case of updates. New owned items thus don't have a `span`, represented by `None`.
mod owned;
/// Borrowed items are nothing more than a fancy 'handle' to an item stored in a file, which can be made editable to make updates.
pub mod borrowed;
mod decode;
| 32.264151 | 154 | 0.681871 |
acbf73bf62a244f544190caa265074fe96f4381d | 28,443 | pub mod color_format;
pub mod error;
mod iter;
pub mod options;
mod utils;
use crate::geometry::{
color::Color,
mesh::{Face, Mesh, Vertex},
position::Position,
};
use self::{
error::{Error, Kind},
iter::OffLines,
options::Options,
utils::{ConvertVec, StrParts},
};
pub type Result<T = ()> = std::result::Result<T, Error>;
/// Parses a [`crate::geometry::mesh::Mesh`] from a `off` string.
#[derive(Debug, Clone)]
pub struct Parser<'a> {
lines: OffLines<'a>,
prev_line_index: usize,
vertex_count: usize,
face_count: usize,
edge_count: usize,
document: Mesh,
options: Options,
}
impl<'a> Parser<'a> {
/// Creates a new [`Parser`] from a `off` string.
pub fn new<S: AsRef<str>>(s: &'a S, options: Options) -> Self {
let lines = OffLines::new(s.as_ref());
Parser {
lines,
prev_line_index: 0,
vertex_count: 0,
face_count: 0,
edge_count: 0,
document: Mesh::new(),
options,
}
}
/// Parses the `off` string and returns a [`Result`] containing the [`crate::geometry::mesh::Mesh`] or an [`Error`].
///
/// # Errors
///
/// Will return `Error` if an error occurs while parsing the `off` data.
pub fn parse(mut self) -> crate::Result {
self.parse_header()?;
self.parse_counts()?;
self.parse_vertices()?;
self.parse_faces()?;
Ok(self.finalize())
}
/// Progress to the next line.
fn next_line(&mut self) -> Option<(usize, &'a str)> {
let (line_index, line) = self.lines.next()?;
self.prev_line_index = line_index;
Some((line_index, line))
}
/// Parses the header of the `off` string.
fn parse_header(&mut self) -> Result {
let (line_index, line) = self
.next_line()
.ok_or_else(|| Error::without_message(Kind::Empty, 0))?;
if line != "OFF" {
return Err(Error::with_message(
Kind::InvalidHeader,
line_index,
"First non-comment line should be `OFF`",
));
}
Ok(())
}
/// Parses the counts of the `off` string.
fn parse_counts(&mut self) -> Result {
let (line_index, line) = self.next_line().ok_or_else(|| {
Error::with_message(Kind::Missing, self.prev_line_index + 1, "No counts present")
})?;
let counts: Vec<&str> = line.split_line();
let num: Vec<usize> = counts.parse_string_to().map_err(|err| {
Error::with_message(
Kind::InvalidCounts,
line_index,
format!("Failed to parse count as number ({})", err),
)
})?;
match num[..] {
[vertex_count, face_count, edge_count] => {
self.vertex_count = vertex_count;
self.face_count = face_count;
self.edge_count = edge_count;
}
[vertex_count, face_count] => {
self.vertex_count = vertex_count;
self.face_count = face_count;
}
_ => {
return Err(Error::with_message(
Kind::InvalidCounts,
line_index,
format!(
"Invalid amount of counts present (expected: 2-3, actual: {})",
num.len()
),
));
}
}
// Check for limits
if self.vertex_count > self.options.limits.vertex_count {
return Err(Error::with_message(
Kind::LimitExceeded,
line_index,
format!(
"Vertext count exceeds limit (limit: {}, actual: {})",
self.options.limits.vertex_count, self.vertex_count
),
));
}
if self.face_count > self.options.limits.face_count {
return Err(Error::with_message(
Kind::LimitExceeded,
line_index,
format!(
"Face count exceeds limit (limit: {}, actual: {})",
self.options.limits.face_count, self.face_count
),
));
}
Ok(())
}
/// Parses the vertices of the `off` string.
fn parse_vertices(&mut self) -> Result {
for _ in 0..self.vertex_count {
let (line_index, line) = self.next_line().ok_or_else(|| {
Error::with_message(
Kind::Missing,
self.prev_line_index + 1,
"Expected vertex definition",
)
})?;
let parts = line.split_line();
let vertex = self.parse_vertex(line_index, &parts)?;
self.document.vertices.push(vertex);
}
Ok(())
}
/// Parses a vertex from a `off` string.
fn parse_vertex(&mut self, line_index: usize, parts: &[&str]) -> Result<Vertex> {
if parts.len() < 3 {
return Err(Error::with_message(
Kind::InvalidVertexPosition,
line_index,
format!(
"Not enough parts for position (expected: >= 3, actual: {})",
parts.len()
),
));
}
let position = Parser::parse_position(line_index, &parts[0..=2])?;
let color = if parts.len() > 3 {
Some(self.parse_color(line_index, &parts[3..])?)
} else {
None
};
Ok(Vertex { position, color })
}
/// Parses a position from a `off` string.
fn parse_position(line_index: usize, parts: &[&str]) -> Result<Position> {
if parts.len() != 3 {
return Err(Error::with_message(
Kind::InvalidVertexPosition,
line_index,
format!(
"Invalid number of coordinates given (expected: 3, actual: {})",
parts.len()
),
));
}
let position_parts: Vec<f32> = parts
.iter()
.map(|s| {
s.parse().map_err(|err| {
Error::with_message(
Kind::InvalidVertexPosition,
line_index,
format!("Failed to parse coordinate as number: ({})", err),
)
})
})
.collect::<Result<Vec<f32>>>()?;
Position::try_from(position_parts).map_err(|err| {
Error::with_message(
Kind::InvalidVertexPosition,
line_index,
format!("Failed to parse position: ({})", err),
)
})
}
/// Parses a color from a `off` string.
fn parse_color(&mut self, line_index: usize, parts: &[&str]) -> Result<Color> {
if parts.len() != self.options.color_format.channel_count() {
return Err(Error::with_message(
Kind::InvalidColor,
line_index,
format!(
"Invalid number of color elements given (expected: {}, actual: {})",
self.options.color_format.channel_count(),
parts.len()
),
));
}
if self.options.color_format.is_float() {
// parse as f32
let color_parts = parts
.iter()
.map(|s| {
s.parse::<f32>().map_err(|err| {
Error::with_message(
Kind::InvalidColor,
line_index,
format!("Failed to parse color as float: {}", err),
)
})
})
.collect::<Result<Vec<f32>>>()?;
Color::try_from(color_parts).map_err(|err| {
Error::with_message(
Kind::InvalidColor,
line_index,
format!("Failed to parse color: {}", err),
)
})
} else {
// parse as u8
let color_parts = parts
.iter()
.map(|s| {
s.parse::<u8>().map_err(|err| {
Error::with_message(
Kind::InvalidColor,
line_index,
format!("Failed to parse color as u8: {}", err),
)
})
})
.collect::<Result<Vec<u8>>>()?;
Color::try_from(color_parts).map_err(|err| {
Error::with_message(
Kind::InvalidColor,
line_index,
format!("Failed to parse color: {}", err),
)
})
}
}
/// Parses the faces of the `off` string.
fn parse_faces(&mut self) -> Result {
for _ in 0..self.face_count {
let (line_index, line) = self.next_line().ok_or_else(|| {
Error::with_message(
Kind::Missing,
self.prev_line_index + 1,
"Expected face definition",
)
})?;
let parts: Vec<&str> = line.split_line();
let face = self.parse_face(line_index, &parts)?;
self.document.faces.push(face);
}
Ok(())
}
/// Parses a face from a `off` string.
fn parse_face(&mut self, line_index: usize, mut parts: &[&str]) -> Result<Face> {
if parts.len() < 4 {
return Err(Error::with_message(
Kind::InvalidFace,
line_index,
format!("Not enough arguments. At least three vertex indicies required (e.g. `3 1 2 3`). {} arguments given", parts.len()),
));
}
let vertex_count: usize = parts[0].parse().map_err(|err| {
Error::with_message(
Kind::InvalidFace,
line_index,
format!("Failed to parse vertex count for face definition: {}", err),
)
})?;
if vertex_count < 3 {
return Err(Error::with_message(
Kind::InvalidFace,
line_index,
format!(
"Vertex count should be at least 3 (actual: {})",
vertex_count
),
));
}
if vertex_count > self.options.limits.face_vertex_count {
return Err(Error::with_message(
Kind::LimitExceeded,
line_index,
format!(
"Vertex count of face exceeds limit (limit: {}, actual: {})",
self.options.limits.face_vertex_count, vertex_count
),
));
}
// "Consume" vertex_count
parts = &parts[1..];
// faces are polygons and might have to be triangulated later. Therefore we require at least three vertices
if parts.len() < 3 {
return Err(Error::with_message(
Kind::InvalidFace,
line_index,
format!(
"Not enough vertices for face (expected: >= 3, actual: {})",
parts.len()
),
));
}
// sanity check: at least vertex_count parts
if parts.len() < vertex_count {
return Err(Error::with_message(
Kind::InvalidFace,
line_index,
format!(
"Not enough vertex indices given for face definition (expected: {}, actual: {})",
vertex_count,
parts.len()
),
));
}
let vertices = Parser::parse_face_indices(line_index, vertex_count, parts)?;
// "Consume" vertex indexes
parts = &parts[vertex_count..];
let color = if parts.is_empty() {
None
} else {
Some(self.parse_color(line_index, parts)?)
};
Ok(Face { vertices, color })
}
/// Parses the face vertex indices from a line.
fn parse_face_indices(
line_index: usize,
vertex_count: usize,
parts: &[&str],
) -> Result<Vec<usize>> {
let vertices: Vec<usize> = parts
.iter()
.take(vertex_count)
.map(|s| {
s.parse().map_err(|err| {
Error::with_message(
Kind::InvalidFaceIndex,
line_index,
format!("Failed to parse vertex index as number: ({})", err),
)
})
})
.collect::<Result<Vec<usize>>>()?;
if vertices.len() != vertex_count {
return Err(Error::with_message(
Kind::InvalidFaceIndex,
line_index,
format!(
"Invalid number of face indexes given (expected: {}, actual: {})",
vertex_count,
vertices.len()
),
));
}
Ok(vertices)
}
/// Finalizes the parsing by returning the [`Mesh`].
fn finalize(self) -> Mesh {
self.document
}
}
#[cfg(test)]
#[allow(unused)]
mod tests {
use crate::parser::color_format::ColorFormat;
use super::*;
#[test]
fn parse_header() {
let mut parser = Parser::new(&"OFF", Options::default());
assert!(parser.parse_header().is_ok());
}
#[test]
fn parse_header_missing() {
let mut parser = Parser::new(&"", Options::default());
let header = parser.parse_header();
assert!(header.is_err());
assert!(matches!(
header.unwrap_err(),
Error {
kind: Kind::Empty,
..
}
));
}
#[test]
fn parse_header_invalid() {
let mut parser = Parser::new(&"COFF", Options::default());
let header = parser.parse_header();
assert!(header.is_err());
assert!(matches!(
header.unwrap_err(),
Error {
kind: Kind::InvalidHeader,
..
}
));
}
#[test]
fn parse_counts() {
let mut parser = Parser::new(&"8 6 12", Options::default());
assert!(parser.parse_counts().is_ok());
assert_eq!(parser.vertex_count, 8);
assert_eq!(parser.face_count, 6);
assert_eq!(parser.edge_count, 12);
}
#[test]
fn parse_counts_missing() {
let mut parser = Parser::new(&"", Options::default());
let counts = parser.parse_counts();
assert!(counts.is_err());
assert!(matches!(
counts.unwrap_err(),
Error {
kind: Kind::Missing,
..
}
));
}
#[test]
fn parse_counts_too_many() {
let mut parser = Parser::new(&"8 6 12 16", Options::default());
let counts = parser.parse_counts();
assert!(counts.is_err());
assert!(matches!(
counts.unwrap_err(),
Error {
kind: Kind::InvalidCounts,
..
}
));
}
#[test]
fn parse_counts_limits() {
let mut parser = Parser::new(&"999999999999 888888888888 777777777", Options::default());
let counts = parser.parse_counts();
assert!(counts.is_err());
assert!(matches!(
counts.unwrap_err(),
Error {
kind: Kind::LimitExceeded,
..
}
));
}
#[test]
fn parse_vertices() {
let mut parser = Parser::new(
&"3.0 1.0 2.0 0.1 0.2 0.3 1.0\n1.0 2.0 3.0 0.1 0.2 0.3 1.0",
Options::default(),
);
parser.vertex_count = 2;
let result = parser.parse_vertices();
assert!(result.is_ok());
assert!(parser.next_line().is_none());
assert!(parser.document.vertices.len() == 2);
assert!(
parser.document.vertices[0]
== Vertex::new(
Position::new(3.0, 1.0, 2.0),
Some(Color::new(0.1, 0.2, 0.3, 1.0).unwrap()),
)
);
assert!(
parser.document.vertices[1]
== Vertex::new(
Position::new(1.0, 2.0, 3.0),
Some(Color::new(0.1, 0.2, 0.3, 1.0).unwrap()),
)
);
}
#[test]
fn parse_vertex() {
let mut parser = Parser::new(&"", Options::default());
let vertex = parser.parse_vertex(0, &["1.0", "2.0", "3.0"]);
assert!(vertex.is_ok());
assert_eq!(
vertex.unwrap(),
Vertex::new(Position::new(1.0, 2.0, 3.0), None)
);
}
#[test]
fn parse_vertex_too_few_parts() {
let mut parser = Parser::new(&"", Options::default());
let vertex = parser.parse_vertex(0, &["1.0", "2.0"]);
assert!(vertex.is_err());
assert!(matches!(
vertex.unwrap_err(),
Error {
kind: Kind::InvalidVertexPosition,
..
}
));
}
#[test]
fn parse_position() {
let position = Parser::parse_position(0, &["1", "2", "3"]);
assert!(position.is_ok());
assert_eq!(
position.unwrap(),
Position {
x: 1.0,
y: 2.0,
z: 3.0
}
);
}
#[test]
fn parse_position_no_number() {
let position = Parser::parse_position(0, &["1", "2", "a"]);
assert!(position.is_err());
assert!(matches!(
position.unwrap_err(),
Error {
kind: Kind::InvalidVertexPosition,
..
}
));
}
#[test]
fn parse_position_too_few_parts() {
let position = Parser::parse_position(0, &["1", "2"]);
assert!(position.is_err());
assert!(matches!(
position.unwrap_err(),
Error {
kind: Kind::InvalidVertexPosition,
..
}
));
}
#[test]
fn parse_position_too_many_parts() {
let position = Parser::parse_position(0, &["1", "2", "3", "5"]);
assert!(position.is_err());
assert!(matches!(
position.unwrap_err(),
Error {
kind: Kind::InvalidVertexPosition,
..
}
));
}
#[test]
fn parse_color_rgbfloat() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBFloat,
..Options::default()
},
);
let color = parser.parse_color(0, &["1.0", "0.5", "0.3"]);
assert!(color.is_ok());
assert_eq!(
color.unwrap(),
Color {
red: 1.0,
green: 0.5,
blue: 0.3,
alpha: 1.0,
}
);
}
#[test]
fn parse_color_rgbafloat() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBAFloat,
..Options::default()
},
);
let color = parser.parse_color(0, &["1.0", "0.5", "0.3", "0.5"]);
assert!(color.is_ok());
assert_eq!(
color.unwrap(),
Color {
red: 1.0,
green: 0.5,
blue: 0.3,
alpha: 0.5,
}
);
}
#[test]
    fn parse_color_rgbinteger() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBInteger,
..Options::default()
},
);
let color = parser.parse_color(0, &["255", "128", "0"]);
assert!(color.is_ok());
assert_eq!(
color.unwrap(),
Color {
red: 1.0,
green: 0.501_960_8,
blue: 0.0,
alpha: 1.0,
}
);
}
#[test]
    fn parse_color_rgbinteger_fail() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBInteger,
..Options::default()
},
);
let color = parser.parse_color(0, &["255", "128.0", "0"]);
assert!(color.is_err());
assert!(matches!(
color.unwrap_err(),
Error {
kind: Kind::InvalidColor,
..
}
));
}
#[test]
    fn parse_color_rgbainteger() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBAInteger,
..Options::default()
},
);
let color = parser.parse_color(0, &["255", "128", "0", "255"]);
assert!(color.is_ok());
assert_eq!(
color.unwrap(),
Color {
red: 1.0,
green: 0.501_960_8,
blue: 0.0,
alpha: 1.0,
}
);
}
#[test]
fn parse_color_element_count() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBFloat,
..Options::default()
},
);
let color = parser.parse_color(0, &["1.0", "0.5", "0.3", "0.4"]);
assert!(color.is_err());
assert!(matches!(
color.unwrap_err(),
Error {
kind: Kind::InvalidColor,
..
}
));
}
#[test]
fn parse_faces() {
let mut parser = Parser::new(
&"3 1 2 3 0.1 0.2 0.3 1.0\n3 3 2 1 0.2 0.3 0.4 1.0",
Options::default(),
);
parser.face_count = 2;
let result = parser.parse_faces();
assert!(result.is_ok());
assert!(parser.next_line().is_none());
assert!(parser.document.faces.len() == 2);
assert!(parser.document.faces[0].vertices == vec![1, 2, 3]);
assert!(
parser.document.faces[0].color
== Some(Color {
red: 0.1,
green: 0.2,
blue: 0.3,
alpha: 1.0,
})
);
assert!(parser.document.faces[1].vertices == vec![3, 2, 1]);
assert!(
parser.document.faces[1].color
== Some(Color {
red: 0.2,
green: 0.3,
blue: 0.4,
alpha: 1.0,
})
);
}
#[test]
fn parse_face() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["3", "1", "2", "3"]);
assert!(result.is_ok());
assert_eq!(
result.unwrap(),
Face {
vertices: vec![1, 2, 3],
color: None
}
);
}
#[test]
fn parse_face_more() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["4", "2", "3", "1", "1337"]);
assert!(result.is_ok());
assert_eq!(
result.unwrap(),
Face {
vertices: vec![2, 3, 1, 1337],
color: None
}
);
}
#[test]
fn parse_face_too_little_parts() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["6", "1", "2", "3"]);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidFace,
..
}
));
}
#[test]
fn parse_face_too_many_parts() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["3", "2", "3", "2", "3"]);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidColor,
..
}
));
}
#[test]
fn parse_face_no_number() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["3", "1", "asdf", "3"]);
assert!(result.is_err());
println!("{:?}", result);
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidFaceIndex,
..
}
));
}
#[test]
fn parse_face_color() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["3", "1", "2", "3", "0.1", "0.2", "0.3", "0.4"]);
assert!(result.is_ok());
assert_eq!(
result.unwrap(),
Face {
vertices: vec![1, 2, 3],
color: Some(Color {
red: 0.1,
green: 0.2,
blue: 0.3,
alpha: 0.4
})
}
);
}
#[test]
fn parse_face_color_fail() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["3", "1", "2", "3", "0.1", "0.2"]);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidColor,
..
}
));
}
#[test]
fn parse_face_color_fail_no_alpha() {
let mut parser = Parser::new(
&"",
Options {
color_format: ColorFormat::RGBFloat,
..Options::default()
},
);
let result = parser.parse_face(0, &["3", "1", "2", "3", "0.1", "0.2", "0.3"]);
assert!(result.is_ok());
assert_eq!(
result.unwrap(),
Face {
vertices: vec![1, 2, 3],
color: Some(Color {
red: 0.1,
green: 0.2,
blue: 0.3,
alpha: 1.0
})
}
);
}
#[test]
fn parse_face_color_fail_no_alpha_fail() {
let mut parser = Parser::new(&"", Options::default());
let result = parser.parse_face(0, &["3", "1", "2", "3", "0.1", "0.2", "0.3"]);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidColor,
..
}
));
}
#[test]
fn parse_face_index() {
let result = Parser::parse_face_indices(0, 3, &["1", "2", "3"]);
assert!(result.is_ok());
assert_eq!(result.unwrap(), vec![1, 2, 3]);
}
#[test]
fn parse_face_index_more() {
let result = Parser::parse_face_indices(0, 5, &["1", "2", "3", "1", "1337"]);
assert!(result.is_ok());
assert_eq!(result.unwrap(), vec![1, 2, 3, 1, 1337]);
}
#[test]
fn parse_face_index_too_little_parts() {
let result = Parser::parse_face_indices(0, 5, &["1", "2", "3"]);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidFaceIndex,
..
}
));
}
#[test]
fn parse_face_index_too_many_parts() {
let result = Parser::parse_face_indices(0, 3, &["1", "2", "3", "2", "3"]);
assert!(result.is_ok());
assert_eq!(result.unwrap(), vec![1, 2, 3]);
}
#[test]
fn parse_face_index_no_number() {
let result = Parser::parse_face_indices(0, 3, &["1", "asdf", "3"]);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
Error {
kind: Kind::InvalidFaceIndex,
..
}
));
}
}
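The integer color tests above expect 8-bit channels to come out as floats (`128` → `0.501_960_8`), i.e. each channel divided by 255. A minimal sketch of that normalization, separate from the parser's own `Color::try_from`:

```rust
// Map an 8-bit color channel into the 0.0..=1.0 float range.
fn channel_to_f32(c: u8) -> f32 {
    f32::from(c) / 255.0
}

fn main() {
    assert_eq!(channel_to_f32(255), 1.0);
    assert_eq!(channel_to_f32(0), 0.0);
    // 128 / 255 — the value the RGBInteger tests assert against.
    assert!((channel_to_f32(128) - 0.501_960_8).abs() < 1e-6);
}
```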
| 28.846856 | 139 | 0.440741 |
1460635557fe0c572e41cc835ccdb97bb1f10c67 | 1,458 | #[cfg(test)]
mod process_token_tests {
use super::jwt::JwtToken;
use super::process_token;
use actix_web::test::TestRequest;
#[test]
fn no_token_process_token() {
let mock_request = TestRequest::with_header("test", "test").to_srv_request();
match process_token(&mock_request) {
Err(message) => assert_eq!("there is no token", message),
_ => panic!("No token in request header should fail"),
}
}
#[test]
fn incorrect_token() {
let mock_request = TestRequest::with_header("user-token", "test").to_srv_request();
match process_token(&mock_request) {
Err(message) => assert_eq!("Could not decode", message),
_ => panic!("Incorrect token should error"),
}
}
#[test]
fn correct_token() {
let encoded_token: String = JwtToken::encode(32);
let mock_request = TestRequest::with_header("user-token", encoded_token).to_srv_request();
match process_token(&mock_request) {
Ok(token) => assert_eq!("passed", token),
_ => panic!("encoded token should pass"),
}
}
}
pub mod jwt;
mod processes;
use actix_web::dev::ServiceRequest;
pub fn process_token(request: &ServiceRequest) -> Result<String, &'static str> {
match processes::extract_header_token(request) {
Ok(token) => processes::check_password(token),
Err(message) => Err(message),
}
}
| 29.16 | 98 | 0.615912 |
e223aa7bc765621233d07465ab0865da9a849f4c | 36,877 | //! Formatters for logging `tracing` events.
use super::time::{self, FormatTime, SystemTime};
use crate::{
field::{MakeOutput, MakeVisitor, RecordFields, VisitFmt, VisitOutput},
fmt::fmt_subscriber::FmtContext,
fmt::fmt_subscriber::FormattedFields,
registry::LookupSpan,
};
use std::{
fmt::{self, Write},
iter,
marker::PhantomData,
};
use tracing_core::{
field::{self, Field, Visit},
span, Collect, Event, Level,
};
#[cfg(feature = "tracing-log")]
use tracing_log::NormalizeEvent;
#[cfg(feature = "ansi")]
use ansi_term::{Colour, Style};
#[cfg(feature = "json")]
mod json;
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub use json::*;
#[cfg(feature = "ansi")]
mod pretty;
#[cfg(feature = "ansi")]
#[cfg_attr(docsrs, doc(cfg(feature = "ansi")))]
pub use pretty::*;
use fmt::{Debug, Display};
/// A type that can format a tracing `Event` for a `fmt::Write`.
///
/// `FormatEvent` is primarily used in the context of [`fmt::Collector`] or [`fmt::Subscriber`]. Each time an event is
/// dispatched to [`fmt::Collector`] or [`fmt::Subscriber`], the collector or subscriber forwards it to
/// its associated `FormatEvent` to emit a log message.
///
/// This trait is already implemented for function pointers with the same
/// signature as `format_event`.
///
/// [`fmt::Collector`]: super::Collector
/// [`fmt::Subscriber`]: super::Subscriber
pub trait FormatEvent<S, N>
where
S: Collect + for<'a> LookupSpan<'a>,
N: for<'a> FormatFields<'a> + 'static,
{
/// Write a log message for `Event` in `Context` to the given `Write`.
fn format_event(
&self,
ctx: &FmtContext<'_, S, N>,
writer: &mut dyn fmt::Write,
event: &Event<'_>,
) -> fmt::Result;
}
impl<S, N> FormatEvent<S, N>
for fn(ctx: &FmtContext<'_, S, N>, &mut dyn fmt::Write, &Event<'_>) -> fmt::Result
where
S: Collect + for<'a> LookupSpan<'a>,
N: for<'a> FormatFields<'a> + 'static,
{
fn format_event(
&self,
ctx: &FmtContext<'_, S, N>,
writer: &mut dyn fmt::Write,
event: &Event<'_>,
) -> fmt::Result {
(*self)(ctx, writer, event)
}
}
/// A type that can format a [set of fields] to a `fmt::Write`.
///
/// `FormatFields` is primarily used in the context of [`fmt::Subscriber`]. Each
/// time a span or event with fields is recorded, the subscriber will format
/// those fields with its associated `FormatFields` implementation.
///
/// [set of fields]: RecordFields
/// [`fmt::Subscriber`]: super::Subscriber
pub trait FormatFields<'writer> {
/// Format the provided `fields` to the provided `writer`, returning a result.
fn format_fields<R: RecordFields>(
&self,
writer: &'writer mut dyn fmt::Write,
fields: R,
) -> fmt::Result;
/// Record additional field(s) on an existing span.
///
/// By default, this appends a space to the current set of fields if it is
/// non-empty, and then calls `self.format_fields`. If different behavior is
/// required, the default implementation of this method can be overridden.
fn add_fields(&self, current: &'writer mut String, fields: &span::Record<'_>) -> fmt::Result {
if !current.is_empty() {
current.push(' ');
}
self.format_fields(current, fields)
}
}
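The default `add_fields` above only inserts a separating space when fields were already recorded, so repeated records concatenate cleanly. The string behavior it relies on, in isolation:

```rust
// Append a newly formatted field set to an existing buffer,
// space-separated — the same joining rule as `FormatFields::add_fields`.
fn add_fields(current: &mut String, formatted: &str) {
    if !current.is_empty() {
        current.push(' ');
    }
    current.push_str(formatted);
}

fn main() {
    let mut fields = String::new();
    add_fields(&mut fields, "method=GET");  // first record: no leading space
    add_fields(&mut fields, "status=200");  // later records are space-joined
    assert_eq!(fields, "method=GET status=200");
}
```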
/// Returns the default configuration for an [event formatter].
///
/// Methods on the returned event formatter can be used for further
/// configuration. For example:
///
/// ```rust
/// let format = tracing_subscriber::fmt::format()
/// .without_time() // Don't include timestamps
/// .with_target(false) // Don't include event targets.
/// .with_level(false) // Don't include event levels.
/// .compact(); // Use a more compact, abbreviated format.
///
/// // Use the configured formatter when building a new subscriber.
/// tracing_subscriber::fmt()
/// .event_format(format)
/// .init();
/// ```
pub fn format() -> Format {
Format::default()
}
/// Returns the default configuration for a JSON [event formatter].
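///
/// As with [`format()`], the returned formatter can be configured further and
/// passed to a subscriber builder. A minimal sketch (marked `ignore`; assumes
/// the `json` feature is enabled):
///
/// ```ignore
/// // Emit events as newline-delimited JSON objects.
/// tracing_subscriber::fmt()
///     .json()
///     .init();
/// ```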
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub fn json() -> Format<Json> {
format().json()
}
/// Returns a [`FormatFields`] implementation that formats fields using the
/// provided function or closure.
///
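/// # Examples
///
/// A minimal sketch of a custom field formatter (marked `ignore`, as the
/// exact import paths may differ between versions of this crate):
///
/// ```ignore
/// use tracing_subscriber::fmt::format;
///
/// let formatter =
///     // Format each field as `name: value`.
///     format::debug_fn(|writer, field, value| write!(writer, "{}: {:?}", field, value));
/// ```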
pub fn debug_fn<F>(f: F) -> FieldFn<F>
where
F: Fn(&mut dyn fmt::Write, &Field, &dyn fmt::Debug) -> fmt::Result + Clone,
{
FieldFn(f)
}
/// A [`FormatFields`] implementation that formats fields by calling a function
/// or closure.
///
#[derive(Debug, Clone)]
pub struct FieldFn<F>(F);
/// The [visitor] produced by [`FieldFn`]'s [`MakeVisitor`] implementation.
///
/// [visitor]: super::super::field::Visit
/// [`MakeVisitor`]: super::super::field::MakeVisitor
pub struct FieldFnVisitor<'a, F> {
f: F,
writer: &'a mut dyn fmt::Write,
result: fmt::Result,
}
/// Marker for `Format` that indicates that the compact log format should be used.
///
/// The compact format only includes the fields from the most recently entered span.
#[derive(Default, Debug, Copy, Clone, Eq, PartialEq)]
pub struct Compact;
/// Marker for `Format` that indicates that the verbose log format should be used.
///
/// The full format includes fields from all entered spans.
#[derive(Default, Debug, Copy, Clone, Eq, PartialEq)]
pub struct Full;
/// A pre-configured event formatter.
///
/// You will usually want to use this as the `FormatEvent` for a `FmtSubscriber`.
///
/// The default logging format, [`Full`], includes all fields in each event and its containing
/// spans. The [`Compact`] logging format includes only the fields from the most-recently-entered
/// span.
#[derive(Debug, Clone)]
pub struct Format<F = Full, T = SystemTime> {
format: F,
pub(crate) timer: T,
pub(crate) ansi: bool,
pub(crate) display_target: bool,
pub(crate) display_level: bool,
pub(crate) display_thread_id: bool,
pub(crate) display_thread_name: bool,
}
impl Default for Format<Full, SystemTime> {
fn default() -> Self {
Format {
format: Full,
timer: SystemTime,
ansi: true,
display_target: true,
display_level: true,
display_thread_id: false,
display_thread_name: false,
}
}
}
impl<F, T> Format<F, T> {
/// Use a less verbose output format.
///
/// See [`Compact`].
pub fn compact(self) -> Format<Compact, T> {
Format {
format: Compact,
timer: self.timer,
ansi: self.ansi,
display_target: false,
display_level: self.display_level,
display_thread_id: self.display_thread_id,
display_thread_name: self.display_thread_name,
}
}
/// Use an excessively pretty, human-readable output format.
///
/// See [`Pretty`].
///
/// Note that this requires the "ansi" feature to be enabled.
#[cfg(feature = "ansi")]
#[cfg_attr(docsrs, doc(cfg(feature = "ansi")))]
pub fn pretty(self) -> Format<Pretty, T> {
Format {
format: Pretty::default(),
timer: self.timer,
ansi: self.ansi,
display_target: self.display_target,
display_level: self.display_level,
display_thread_id: self.display_thread_id,
display_thread_name: self.display_thread_name,
}
}
/// Use the full JSON format.
///
/// The full format includes fields from all entered spans.
///
/// # Example Output
///
/// ```ignore,json
/// {"timestamp":"Feb 20 11:28:15.096","level":"INFO","target":"mycrate","fields":{"message":"some message", "key": "value"}}
/// ```
///
/// # Options
///
/// - [`Format::flatten_event`] can be used to enable flattening event fields into the root
/// object.
///
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub fn json(self) -> Format<Json, T> {
Format {
format: Json::default(),
timer: self.timer,
ansi: self.ansi,
display_target: self.display_target,
display_level: self.display_level,
display_thread_id: self.display_thread_id,
display_thread_name: self.display_thread_name,
}
}
/// Use the given [`timer`] for log message timestamps.
///
/// See [`time`] for the provided timer implementations.
///
/// Note that using the `chrono` feature flag enables the
/// additional time formatters [`ChronoUtc`] and [`ChronoLocal`].
///
/// [`timer`]: time::FormatTime
/// [`ChronoUtc`]: time::ChronoUtc
/// [`ChronoLocal`]: time::ChronoLocal
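///
/// For example, using the uptime timer provided by the [`time`] module
/// (a sketch, marked `ignore`; builder details may vary between versions):
///
/// ```ignore
/// use tracing_subscriber::fmt::time;
///
/// // Timestamp events with the duration since the subscriber was created.
/// tracing_subscriber::fmt()
///     .with_timer(time::uptime())
///     .init();
/// ```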
pub fn with_timer<T2>(self, timer: T2) -> Format<F, T2> {
Format {
format: self.format,
timer,
ansi: self.ansi,
display_target: self.display_target,
display_level: self.display_level,
display_thread_id: self.display_thread_id,
display_thread_name: self.display_thread_name,
}
}
/// Do not emit timestamps with log messages.
pub fn without_time(self) -> Format<F, ()> {
Format {
format: self.format,
timer: (),
ansi: self.ansi,
display_target: self.display_target,
display_level: self.display_level,
display_thread_id: self.display_thread_id,
display_thread_name: self.display_thread_name,
}
}
/// Enable ANSI terminal colors for formatted output.
pub fn with_ansi(self, ansi: bool) -> Format<F, T> {
Format { ansi, ..self }
}
/// Sets whether or not an event's target is displayed.
pub fn with_target(self, display_target: bool) -> Format<F, T> {
Format {
display_target,
..self
}
}
/// Sets whether or not an event's level is displayed.
pub fn with_level(self, display_level: bool) -> Format<F, T> {
Format {
display_level,
..self
}
}
/// Sets whether or not the [thread ID] of the current thread is displayed
    /// when formatting events.
///
/// [thread ID]: std::thread::ThreadId
pub fn with_thread_ids(self, display_thread_id: bool) -> Format<F, T> {
Format {
display_thread_id,
..self
}
}
/// Sets whether or not the [name] of the current thread is displayed
    /// when formatting events.
///
/// [name]: std::thread#naming-threads
pub fn with_thread_names(self, display_thread_name: bool) -> Format<F, T> {
Format {
display_thread_name,
..self
}
}
fn format_level(&self, level: Level, writer: &mut dyn fmt::Write) -> fmt::Result
where
F: LevelNames,
{
if self.display_level {
let fmt_level = {
#[cfg(feature = "ansi")]
{
F::format_level(level, self.ansi)
}
#[cfg(not(feature = "ansi"))]
{
F::format_level(level)
}
};
return write!(writer, "{} ", fmt_level);
}
Ok(())
}
}
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
impl<T> Format<Json, T> {
/// Use the full JSON format with the event's event fields flattened.
///
/// # Example Output
///
/// ```ignore,json
/// {"timestamp":"Feb 20 11:28:15.096","level":"INFO","target":"mycrate", "message":"some message", "key": "value"}
/// ```
/// See [`Json`]
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub fn flatten_event(mut self, flatten_event: bool) -> Format<Json, T> {
self.format.flatten_event(flatten_event);
self
}
/// Sets whether or not the formatter will include the current span in
/// formatted events.
///
/// See [`Json`]
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub fn with_current_span(mut self, display_current_span: bool) -> Format<Json, T> {
self.format.with_current_span(display_current_span);
self
}
/// Sets whether or not the formatter will include a list (from root to
/// leaf) of all currently entered spans in formatted events.
///
/// See [`Json`]
#[cfg(feature = "json")]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub fn with_span_list(mut self, display_span_list: bool) -> Format<Json, T> {
self.format.with_span_list(display_span_list);
self
}
}
impl<S, N, T> FormatEvent<S, N> for Format<Full, T>
where
S: Collect + for<'a> LookupSpan<'a>,
N: for<'a> FormatFields<'a> + 'static,
T: FormatTime,
{
fn format_event(
&self,
ctx: &FmtContext<'_, S, N>,
writer: &mut dyn fmt::Write,
event: &Event<'_>,
) -> fmt::Result {
#[cfg(feature = "tracing-log")]
let normalized_meta = event.normalized_metadata();
#[cfg(feature = "tracing-log")]
let meta = normalized_meta.as_ref().unwrap_or_else(|| event.metadata());
#[cfg(not(feature = "tracing-log"))]
let meta = event.metadata();
#[cfg(feature = "ansi")]
time::write(&self.timer, writer, self.ansi)?;
#[cfg(not(feature = "ansi"))]
time::write(&self.timer, writer)?;
self.format_level(*meta.level(), writer)?;
if self.display_thread_name {
let current_thread = std::thread::current();
match current_thread.name() {
Some(name) => {
write!(writer, "{} ", FmtThreadName::new(name))?;
}
// fall-back to thread id when name is absent and ids are not enabled
None if !self.display_thread_id => {
write!(writer, "{:0>2?} ", current_thread.id())?;
}
_ => {}
}
}
if self.display_thread_id {
write!(writer, "{:0>2?} ", std::thread::current().id())?;
}
let full_ctx = {
#[cfg(feature = "ansi")]
{
FullCtx::new(ctx, event.parent(), self.ansi)
}
#[cfg(not(feature = "ansi"))]
{
FullCtx::new(ctx, event.parent())
}
};
write!(writer, "{}", full_ctx)?;
if self.display_target {
write!(writer, "{}: ", meta.target())?;
}
ctx.format_fields(writer, event)?;
writeln!(writer)
}
}
impl<S, N, T> FormatEvent<S, N> for Format<Compact, T>
where
S: Collect + for<'a> LookupSpan<'a>,
N: for<'a> FormatFields<'a> + 'static,
T: FormatTime,
{
fn format_event(
&self,
ctx: &FmtContext<'_, S, N>,
writer: &mut dyn fmt::Write,
event: &Event<'_>,
) -> fmt::Result {
#[cfg(feature = "tracing-log")]
let normalized_meta = event.normalized_metadata();
#[cfg(feature = "tracing-log")]
let meta = normalized_meta.as_ref().unwrap_or_else(|| event.metadata());
#[cfg(not(feature = "tracing-log"))]
let meta = event.metadata();
#[cfg(feature = "ansi")]
time::write(&self.timer, writer, self.ansi)?;
#[cfg(not(feature = "ansi"))]
time::write(&self.timer, writer)?;
self.format_level(*meta.level(), writer)?;
if self.display_thread_name {
let current_thread = std::thread::current();
match current_thread.name() {
Some(name) => {
write!(writer, "{} ", FmtThreadName::new(name))?;
}
// fall-back to thread id when name is absent and ids are not enabled
None if !self.display_thread_id => {
write!(writer, "{:0>2?} ", current_thread.id())?;
}
_ => {}
}
}
if self.display_thread_id {
write!(writer, "{:0>2?} ", std::thread::current().id())?;
}
if self.display_target {
let target = meta.target();
#[cfg(feature = "ansi")]
let target = if self.ansi {
Style::new().bold().paint(target)
} else {
Style::new().paint(target)
};
write!(writer, "{}:", target)?;
}
ctx.format_fields(writer, event)?;
let span = event
.parent()
.and_then(|id| ctx.ctx.span(&id))
.or_else(|| ctx.ctx.lookup_current());
let scope = span.into_iter().flat_map(|span| {
let parents = span.parents();
iter::once(span).chain(parents)
});
#[cfg(feature = "ansi")]
let dimmed = if self.ansi {
Style::new().dimmed()
} else {
Style::new()
};
for span in scope {
let exts = span.extensions();
if let Some(fields) = exts.get::<FormattedFields<N>>() {
if !fields.is_empty() {
#[cfg(feature = "ansi")]
let fields = dimmed.paint(fields.as_str());
write!(writer, " {}", fields)?;
}
}
}
writeln!(writer)
}
}
// === impl FormatFields ===
impl<'writer, M> FormatFields<'writer> for M
where
M: MakeOutput<&'writer mut dyn fmt::Write, fmt::Result>,
M::Visitor: VisitFmt + VisitOutput<fmt::Result>,
{
fn format_fields<R: RecordFields>(
&self,
writer: &'writer mut dyn fmt::Write,
fields: R,
) -> fmt::Result {
let mut v = self.make_visitor(writer);
fields.record(&mut v);
v.finish()
}
}
/// The default [`FormatFields`] implementation.
///
#[derive(Debug)]
pub struct DefaultFields {
// reserve the ability to add fields to this without causing a breaking
// change in the future.
_private: (),
}
/// The [visitor] produced by [`DefaultFields`]'s [`MakeVisitor`] implementation.
///
/// [visitor]: super::super::field::Visit
/// [`MakeVisitor`]: super::super::field::MakeVisitor
pub struct DefaultVisitor<'a> {
writer: &'a mut dyn Write,
is_empty: bool,
result: fmt::Result,
}
impl DefaultFields {
/// Returns a new default [`FormatFields`] implementation.
///
pub fn new() -> Self {
Self { _private: () }
}
}
impl Default for DefaultFields {
fn default() -> Self {
Self::new()
}
}
impl<'a> MakeVisitor<&'a mut dyn Write> for DefaultFields {
type Visitor = DefaultVisitor<'a>;
#[inline]
fn make_visitor(&self, target: &'a mut dyn Write) -> Self::Visitor {
DefaultVisitor::new(target, true)
}
}
// === impl DefaultVisitor ===
impl<'a> DefaultVisitor<'a> {
/// Returns a new default visitor that formats to the provided `writer`.
///
/// # Arguments
/// - `writer`: the writer to format to.
/// - `is_empty`: whether or not any fields have been previously written to
/// that writer.
pub fn new(writer: &'a mut dyn Write, is_empty: bool) -> Self {
Self {
writer,
is_empty,
result: Ok(()),
}
}
fn maybe_pad(&mut self) {
if self.is_empty {
self.is_empty = false;
} else {
self.result = write!(self.writer, " ");
}
}
}
impl<'a> field::Visit for DefaultVisitor<'a> {
fn record_str(&mut self, field: &Field, value: &str) {
if self.result.is_err() {
return;
}
if field.name() == "message" {
self.record_debug(field, &format_args!("{}", value))
} else {
self.record_debug(field, &value)
}
}
fn record_error(&mut self, field: &Field, value: &(dyn std::error::Error + 'static)) {
if let Some(source) = value.source() {
self.record_debug(field, &format_args!("{}, {}: {}", value, field, source))
} else {
self.record_debug(field, &format_args!("{}", value))
}
}
fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
if self.result.is_err() {
return;
}
self.maybe_pad();
self.result = match field.name() {
"message" => write!(self.writer, "{:?}", value),
// Skip fields that are actually log metadata that have already been handled
#[cfg(feature = "tracing-log")]
name if name.starts_with("log.") => Ok(()),
name if name.starts_with("r#") => write!(self.writer, "{}={:?}", &name[2..], value),
name => write!(self.writer, "{}={:?}", name, value),
};
}
}
impl<'a> crate::field::VisitOutput<fmt::Result> for DefaultVisitor<'a> {
fn finish(self) -> fmt::Result {
self.result
}
}
impl<'a> crate::field::VisitFmt for DefaultVisitor<'a> {
fn writer(&mut self) -> &mut dyn fmt::Write {
self.writer
}
}
impl<'a> fmt::Debug for DefaultVisitor<'a> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DefaultVisitor")
.field("writer", &format_args!("<dyn fmt::Write>"))
.field("is_empty", &self.is_empty)
.field("result", &self.result)
.finish()
}
}
struct FullCtx<'a, S, N>
where
S: Collect + for<'lookup> LookupSpan<'lookup>,
N: for<'writer> FormatFields<'writer> + 'static,
{
ctx: &'a FmtContext<'a, S, N>,
span: Option<&'a span::Id>,
#[cfg(feature = "ansi")]
ansi: bool,
}
impl<'a, S, N: 'a> FullCtx<'a, S, N>
where
S: Collect + for<'lookup> LookupSpan<'lookup>,
N: for<'writer> FormatFields<'writer> + 'static,
{
#[cfg(feature = "ansi")]
pub(crate) fn new(
ctx: &'a FmtContext<'a, S, N>,
span: Option<&'a span::Id>,
ansi: bool,
) -> Self {
Self { ctx, span, ansi }
}
#[cfg(not(feature = "ansi"))]
pub(crate) fn new(ctx: &'a FmtContext<'a, S, N>, span: Option<&'a span::Id>) -> Self {
Self { ctx, span }
}
fn bold(&self) -> Style {
#[cfg(feature = "ansi")]
{
if self.ansi {
return Style::new().bold();
}
}
Style::new()
}
}
impl<'a, S, N> fmt::Display for FullCtx<'a, S, N>
where
S: Collect + for<'lookup> LookupSpan<'lookup>,
N: for<'writer> FormatFields<'writer> + 'static,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let bold = self.bold();
let mut seen = false;
let span = self
.span
.and_then(|id| self.ctx.ctx.span(&id))
.or_else(|| self.ctx.ctx.lookup_current());
let scope = span
.into_iter()
.flat_map(|span| span.from_root().chain(iter::once(span)));
for span in scope {
write!(f, "{}", bold.paint(span.metadata().name()))?;
seen = true;
let ext = span.extensions();
let fields = &ext
.get::<FormattedFields<N>>()
.expect("Unable to find FormattedFields in extensions; this is a bug");
if !fields.is_empty() {
write!(f, "{}{}{}", bold.paint("{"), fields, bold.paint("}"))?;
}
f.write_char(':')?;
}
if seen {
f.write_char(' ')?;
}
Ok(())
}
}
#[cfg(not(feature = "ansi"))]
struct Style;
#[cfg(not(feature = "ansi"))]
impl Style {
fn new() -> Self {
Style
}
fn paint(&self, d: impl fmt::Display) -> impl fmt::Display {
d
}
}
struct FmtThreadName<'a> {
name: &'a str,
}
impl<'a> FmtThreadName<'a> {
pub(crate) fn new(name: &'a str) -> Self {
Self { name }
}
}
impl<'a> fmt::Display for FmtThreadName<'a> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
use std::sync::atomic::{
AtomicUsize,
Ordering::{AcqRel, Acquire, Relaxed},
};
// Track the longest thread name length we've seen so far in an atomic,
// so that it can be updated by any thread.
static MAX_LEN: AtomicUsize = AtomicUsize::new(0);
let len = self.name.len();
// Snapshot the current max thread name length.
let mut max_len = MAX_LEN.load(Relaxed);
while len > max_len {
// Try to set a new max length, if it is still the value we took a
// snapshot of.
match MAX_LEN.compare_exchange(max_len, len, AcqRel, Acquire) {
// We successfully set the new max value
Ok(_) => break,
// Another thread set a new max value since we last observed
// it! It's possible that the new length is actually longer than
// ours, so we'll loop again and check whether our length is
// still the longest. If not, we'll just use the newer value.
Err(actual) => max_len = actual,
}
}
// pad thread name using `max_len`
write!(f, "{:>width$}", self.name, width = max_len)
}
}
trait LevelNames {
const TRACE_STR: &'static str;
const DEBUG_STR: &'static str;
const INFO_STR: &'static str;
const WARN_STR: &'static str;
const ERROR_STR: &'static str;
#[cfg(feature = "ansi")]
fn format_level(level: Level, ansi: bool) -> FmtLevel<Self> {
FmtLevel {
level,
ansi,
_f: PhantomData,
}
}
#[cfg(not(feature = "ansi"))]
fn format_level(level: Level) -> FmtLevel<Self> {
FmtLevel {
level,
_f: PhantomData,
}
}
}
impl LevelNames for Full {
const TRACE_STR: &'static str = "TRACE";
const DEBUG_STR: &'static str = "DEBUG";
const INFO_STR: &'static str = " INFO";
const WARN_STR: &'static str = " WARN";
const ERROR_STR: &'static str = "ERROR";
}
impl LevelNames for Compact {
const TRACE_STR: &'static str = "T";
const DEBUG_STR: &'static str = "D";
const INFO_STR: &'static str = "I";
const WARN_STR: &'static str = "W";
const ERROR_STR: &'static str = "!";
}
struct FmtLevel<F: ?Sized> {
level: Level,
#[cfg(feature = "ansi")]
ansi: bool,
_f: PhantomData<fn(F)>,
}
impl<F: LevelNames> fmt::Display for FmtLevel<F> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
#[cfg(feature = "ansi")]
{
if self.ansi {
return match self.level {
Level::TRACE => write!(f, "{}", Colour::Purple.paint(F::TRACE_STR)),
Level::DEBUG => write!(f, "{}", Colour::Blue.paint(F::DEBUG_STR)),
Level::INFO => write!(f, "{}", Colour::Green.paint(F::INFO_STR)),
Level::WARN => write!(f, "{}", Colour::Yellow.paint(F::WARN_STR)),
Level::ERROR => write!(f, "{}", Colour::Red.paint(F::ERROR_STR)),
};
}
}
match self.level {
Level::TRACE => f.pad(F::TRACE_STR),
Level::DEBUG => f.pad(F::DEBUG_STR),
Level::INFO => f.pad(F::INFO_STR),
Level::WARN => f.pad(F::WARN_STR),
Level::ERROR => f.pad(F::ERROR_STR),
}
}
}
// === impl FieldFn ===
impl<'a, F> MakeVisitor<&'a mut dyn fmt::Write> for FieldFn<F>
where
F: Fn(&mut dyn fmt::Write, &Field, &dyn fmt::Debug) -> fmt::Result + Clone,
{
type Visitor = FieldFnVisitor<'a, F>;
fn make_visitor(&self, writer: &'a mut dyn fmt::Write) -> Self::Visitor {
FieldFnVisitor {
writer,
f: self.0.clone(),
result: Ok(()),
}
}
}
impl<'a, F> Visit for FieldFnVisitor<'a, F>
where
F: Fn(&mut dyn fmt::Write, &Field, &dyn fmt::Debug) -> fmt::Result,
{
fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
if self.result.is_ok() {
self.result = (self.f)(&mut self.writer, field, value)
}
}
}
impl<'a, F> VisitOutput<fmt::Result> for FieldFnVisitor<'a, F>
where
F: Fn(&mut dyn fmt::Write, &Field, &dyn fmt::Debug) -> fmt::Result,
{
fn finish(self) -> fmt::Result {
self.result
}
}
impl<'a, F> VisitFmt for FieldFnVisitor<'a, F>
where
F: Fn(&mut dyn fmt::Write, &Field, &dyn fmt::Debug) -> fmt::Result,
{
fn writer(&mut self) -> &mut dyn fmt::Write {
&mut *self.writer
}
}
impl<'a, F> fmt::Debug for FieldFnVisitor<'a, F> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("FieldFnVisitor")
.field("f", &format_args!("<Fn>"))
.field("writer", &format_args!("<dyn fmt::Write>"))
.field("result", &self.result)
.finish()
}
}
// === printing synthetic Span events ===
/// Configures what points in the span lifecycle are logged as events.
///
/// See also [`with_span_events`](super::CollectorBuilder::with_span_events()).
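///
/// # Example
///
/// A sketch of enabling synthetic close events on the builder (using the
/// `with_span_events` method referenced above; marked `ignore`, as builder
/// details may vary between versions):
///
/// ```ignore
/// use tracing_subscriber::fmt::format::FmtSpan;
///
/// // Emit one event each time a span closes.
/// tracing_subscriber::fmt()
///     .with_span_events(FmtSpan::CLOSE)
///     .init();
/// ```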
#[derive(Clone, Eq, PartialEq, Ord, PartialOrd)]
pub struct FmtSpan(FmtSpanInner);
impl FmtSpan {
/// spans are ignored (this is the default)
pub const NONE: FmtSpan = FmtSpan(FmtSpanInner::None);
/// one event per enter/exit of a span
pub const ACTIVE: FmtSpan = FmtSpan(FmtSpanInner::Active);
/// one event when the span is dropped
pub const CLOSE: FmtSpan = FmtSpan(FmtSpanInner::Close);
/// events at all points (new, enter, exit, drop)
pub const FULL: FmtSpan = FmtSpan(FmtSpanInner::Full);
}
impl Debug for FmtSpan {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.0 {
FmtSpanInner::None => f.write_str("FmtSpan::NONE"),
FmtSpanInner::Active => f.write_str("FmtSpan::ACTIVE"),
FmtSpanInner::Close => f.write_str("FmtSpan::CLOSE"),
FmtSpanInner::Full => f.write_str("FmtSpan::FULL"),
}
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash, Ord, PartialOrd)]
enum FmtSpanInner {
/// spans are ignored (this is the default)
None,
/// one event per enter/exit of a span
Active,
/// one event when the span is dropped
Close,
/// events at all points (new, enter, exit, drop)
Full,
}
pub(super) struct FmtSpanConfig {
pub(super) kind: FmtSpan,
pub(super) fmt_timing: bool,
}
impl FmtSpanConfig {
pub(super) fn without_time(self) -> Self {
Self {
kind: self.kind,
fmt_timing: false,
}
}
pub(super) fn with_kind(self, kind: FmtSpan) -> Self {
Self {
kind,
fmt_timing: self.fmt_timing,
}
}
pub(super) fn trace_new(&self) -> bool {
matches!(self.kind, FmtSpan::FULL)
}
pub(super) fn trace_active(&self) -> bool {
matches!(self.kind, FmtSpan::ACTIVE | FmtSpan::FULL)
}
pub(super) fn trace_close(&self) -> bool {
matches!(self.kind, FmtSpan::CLOSE | FmtSpan::FULL)
}
}
impl Debug for FmtSpanConfig {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.kind.fmt(f)
}
}
impl Default for FmtSpanConfig {
fn default() -> Self {
Self {
kind: FmtSpan::NONE,
fmt_timing: true,
}
}
}
#[repr(transparent)]
pub(super) struct TimingDisplay(pub(super) u64);
impl Display for TimingDisplay {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let mut t = self.0 as f64;
for unit in ["ns", "µs", "ms", "s"].iter() {
if t < 10.0 {
return write!(f, "{:.2}{}", t, unit);
} else if t < 100.0 {
return write!(f, "{:.1}{}", t, unit);
} else if t < 1000.0 {
return write!(f, "{:.0}{}", t, unit);
}
t /= 1000.0;
}
write!(f, "{:.0}s", t * 1000.0)
}
}
#[cfg(test)]
pub(super) mod test {
use crate::fmt::{test::MockMakeWriter, time::FormatTime};
use tracing::{
self,
collect::with_default,
dispatch::{set_default, Dispatch},
};
use super::TimingDisplay;
use std::fmt;
pub(crate) struct MockTime;
impl FormatTime for MockTime {
fn format_time(&self, w: &mut dyn fmt::Write) -> fmt::Result {
write!(w, "fake time")
}
}
#[cfg(feature = "ansi")]
#[test]
fn with_ansi_true() {
let expected = "\u{1b}[2mfake time\u{1b}[0m \u{1b}[32m INFO\u{1b}[0m tracing_subscriber::fmt::format::test: hello\n";
test_ansi(true, expected);
}
#[cfg(feature = "ansi")]
#[test]
fn with_ansi_false() {
let expected = "fake time INFO tracing_subscriber::fmt::format::test: hello\n";
test_ansi(false, expected);
}
#[cfg(not(feature = "ansi"))]
#[test]
fn without_ansi() {
let make_writer = MockMakeWriter::default();
let expected = "fake time INFO tracing_subscriber::fmt::format::test: hello\n";
let subscriber = crate::fmt::Collector::builder()
            .with_writer(make_writer.clone())
.with_timer(MockTime);
run_test(subscriber, make_writer, expected);
}
#[test]
fn without_level() {
let make_writer = MockMakeWriter::default();
let subscriber = crate::fmt::Collector::builder()
.with_writer(make_writer.clone())
.with_level(false)
.with_ansi(false)
.with_timer(MockTime);
let expected = "fake time tracing_subscriber::fmt::format::test: hello\n";
run_test(subscriber, make_writer, expected);
}
#[cfg(feature = "ansi")]
fn test_ansi(is_ansi: bool, expected: &str) {
let make_writer = MockMakeWriter::default();
let subscriber = crate::fmt::Collector::builder()
.with_writer(make_writer.clone())
.with_ansi(is_ansi)
.with_timer(MockTime);
run_test(subscriber, make_writer, expected)
}
fn run_test(subscriber: impl Into<Dispatch>, buf: MockMakeWriter, expected: &str) {
let _default = set_default(&subscriber.into());
tracing::info!("hello");
assert_eq!(expected, buf.get_string())
}
#[test]
fn overridden_parents() {
let make_writer = MockMakeWriter::default();
let collector = crate::fmt::Collector::builder()
.with_writer(make_writer.clone())
.with_level(false)
.with_ansi(false)
.with_timer(MockTime)
.finish();
with_default(collector, || {
let span1 = tracing::info_span!("span1");
let span2 = tracing::info_span!(parent: &span1, "span2");
tracing::info!(parent: &span2, "hello");
});
assert_eq!(
"fake time span1:span2: tracing_subscriber::fmt::format::test: hello\n",
make_writer.get_string()
);
}
#[test]
fn overridden_parents_in_scope() {
let make_writer = MockMakeWriter::default();
let subscriber = crate::fmt::Collector::builder()
.with_writer(make_writer.clone())
.with_level(false)
.with_ansi(false)
.with_timer(MockTime)
.finish();
with_default(subscriber, || {
let span1 = tracing::info_span!("span1");
let span2 = tracing::info_span!(parent: &span1, "span2");
let span3 = tracing::info_span!("span3");
let _e3 = span3.enter();
tracing::info!("hello");
assert_eq!(
"fake time span3: tracing_subscriber::fmt::format::test: hello\n",
make_writer.get_string().as_str()
);
tracing::info!(parent: &span2, "hello");
assert_eq!(
"fake time span1:span2: tracing_subscriber::fmt::format::test: hello\n",
make_writer.get_string().as_str()
);
});
}
#[test]
fn format_nanos() {
fn fmt(t: u64) -> String {
TimingDisplay(t).to_string()
}
assert_eq!(fmt(1), "1.00ns");
assert_eq!(fmt(12), "12.0ns");
assert_eq!(fmt(123), "123ns");
assert_eq!(fmt(1234), "1.23µs");
assert_eq!(fmt(12345), "12.3µs");
assert_eq!(fmt(123456), "123µs");
assert_eq!(fmt(1234567), "1.23ms");
assert_eq!(fmt(12345678), "12.3ms");
assert_eq!(fmt(123456789), "123ms");
assert_eq!(fmt(1234567890), "1.23s");
assert_eq!(fmt(12345678901), "12.3s");
assert_eq!(fmt(123456789012), "123s");
assert_eq!(fmt(1234567890123), "1235s");
}
}
// Musium -- Music playback daemon with web-based library browser
// Copyright 2021 Ruud van Asseldonk
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// A copy of the License has been included in the root of the repository.
use std::fs;
use std::io;
use std::sync::Arc;
use std::thread;
use tiny_http::{Header, Request, Response, ResponseBox, Server};
use tiny_http::Method::{self, Get, Post, Put};
use crate::config::Config;
use crate::database::Database;
use crate::database;
use crate::mvar::Var;
use crate::player::{Millibel, Player};
use crate::prim::{ArtistId, AlbumId, TrackId};
use crate::scan::BackgroundScanner;
use crate::serialization;
use crate::string_utils::normalize_words;
use crate::systemd;
use crate::thumb_cache::ThumbCache;
use crate::{MetaIndex, MemoryMetaIndex};
fn header_content_type(content_type: &str) -> Header {
Header::from_bytes(&b"Content-Type"[..], content_type.as_bytes())
.expect("Failed to create content-type header, value is not ascii.")
}
fn header_expires_seconds(age_seconds: i64) -> Header {
let now = chrono::Utc::now();
let at = now.checked_add_signed(chrono::Duration::seconds(age_seconds)).unwrap();
// The format from https://tools.ietf.org/html/rfc7234#section-5.3.
let value = at.format("%a, %e %b %Y %H:%M:%S GMT").to_string();
    Header::from_bytes(&b"Expires"[..], value)
        .expect("Failed to create Expires header, value is not ascii.")
}
pub struct MetaServer {
config: Config,
index_var: Var<MemoryMetaIndex>,
thumb_cache_var: Var<ThumbCache>,
player: Player,
scanner: BackgroundScanner,
}
impl MetaServer {
pub fn new(
config: Config,
index_var: Var<MemoryMetaIndex>,
thumb_cache_var: Var<ThumbCache>,
player: Player,
) -> MetaServer {
MetaServer {
            config,
            index_var: index_var.clone(),
            thumb_cache_var: thumb_cache_var.clone(),
            player,
scanner: BackgroundScanner::new(
index_var,
thumb_cache_var,
),
}
}
fn handle_not_found(&self) -> ResponseBox {
Response::from_string("Not Found")
.with_status_code(404) // "404 Not Found"
.boxed()
}
fn handle_bad_request(&self, reason: &'static str) -> ResponseBox {
Response::from_string(reason)
.with_status_code(400) // "400 Bad Request"
.boxed()
}
fn handle_error(&self, reason: &'static str) -> ResponseBox {
Response::from_string(reason)
.with_status_code(500) // "500 Internal Server Error"
.boxed()
}
fn handle_static_file(&self, fname: &str, mime_type: &str) -> ResponseBox {
let file = match fs::File::open(fname) {
Ok(f) => f,
Err(..) => return self.handle_error("Failed to read static file."),
};
Response::from_file(file)
.with_header(header_content_type(mime_type))
.boxed()
}
fn handle_album_cover(&self, id: &str) -> ResponseBox {
let album_id = match AlbumId::parse(id) {
Some(aid) => aid,
None => return self.handle_bad_request("Invalid album id."),
};
let index = &*self.index_var.get();
let tracks = index.get_album_tracks(album_id);
let (_track_id, track) = tracks.first().expect("Albums have at least one track.");
let fname = index.get_filename(track.filename);
let opts = claxon::FlacReaderOptions {
metadata_only: true,
read_picture: claxon::ReadPicture::CoverAsVec,
read_vorbis_comment: false,
};
let reader = match claxon::FlacReader::open_ext(fname, opts) {
Ok(r) => r,
Err(..) => return self.handle_error("Failed to open flac file."),
};
if let Some(cover) = reader.into_pictures().pop() {
let content_type = header_content_type(&cover.mime_type);
let data = cover.into_vec();
Response::from_data(data)
.with_header(content_type)
.with_header(header_expires_seconds(3600 * 24 * 30))
.boxed()
} else {
// The file has no embedded front cover.
self.handle_not_found()
}
}
fn handle_thumb(&self, id: &str) -> ResponseBox {
// TODO: DRY this track id parsing and loading part.
let album_id = match AlbumId::parse(id) {
Some(aid) => aid,
None => return self.handle_bad_request("Invalid album id."),
};
let thumb_cache = self.thumb_cache_var.get();
let img = match thumb_cache.get(album_id) {
None => return self.handle_not_found(),
Some(bytes) => bytes,
};
Response::from_data(img)
.with_header(header_content_type("image/jpeg"))
.with_header(header_expires_seconds(3600 * 24 * 30))
.boxed()
}
fn handle_waveform(&self, db: &mut Database, id: &str) -> ResponseBox {
use crate::waveform::Waveform;
// TODO: DRY this track id parsing and loading part.
let track_id = match TrackId::parse(id) {
Some(tid) => tid,
None => return self.handle_bad_request("Invalid track id."),
};
let waveform = match db.select_track_waveform(track_id) {
Ok(Some(data)) => Waveform::from_bytes(data),
Ok(None) => return self.handle_not_found(),
Err(err) => {
eprintln!("Error while loading waveform: {:?}", err);
return self.handle_error("Database error.");
}
};
let mut svg = Vec::new();
waveform.write_svg(&mut svg).expect("Write to memory does not fail.");
Response::from_data(svg)
.with_header(header_content_type("image/svg+xml"))
.with_header(header_expires_seconds(3600 * 24 * 30))
.boxed()
}
fn handle_track(&self, path: &str) -> ResponseBox {
// Track urls are of the form `/track/f7c153f2b16dc101.flac`.
if !path.ends_with(".flac") {
return self.handle_bad_request("Expected a path ending in .flac.")
}
let id_part = &path[..path.len() - ".flac".len()];
let track_id = match TrackId::parse(id_part) {
Some(tid) => tid,
None => return self.handle_bad_request("Invalid track id."),
};
let index = &*self.index_var.get();
let track = match index.get_track(track_id) {
Some(t) => t,
None => return self.handle_not_found(),
};
let fname = index.get_filename(track.filename);
// TODO: Rather than reading the file into memory in userspace,
// use sendfile.
// TODO: Handle requests with Range header.
let file = match fs::File::open(fname) {
Ok(f) => f,
Err(_) => return self.handle_error("Failed to open file."),
};
Response::from_file(file)
.with_header(header_content_type("audio/flac"))
.boxed()
}
fn handle_album(&self, id: &str) -> ResponseBox {
let album_id = match AlbumId::parse(id) {
Some(aid) => aid,
None => return self.handle_bad_request("Invalid album id."),
};
let index = &*self.index_var.get();
let album = match index.get_album(album_id) {
Some(a) => a,
None => return self.handle_not_found(),
};
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
serialization::write_album_json(index, &mut w, album_id, album).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_artist(&self, id: &str) -> ResponseBox {
let artist_id = match ArtistId::parse(id) {
Some(aid) => aid,
None => return self.handle_bad_request("Invalid artist id."),
};
let index = &*self.index_var.get();
let artist = match index.get_artist(artist_id) {
Some(a) => a,
None => return self.handle_not_found(),
};
let albums = index.get_albums_by_artist(artist_id);
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
serialization::write_artist_json(index, &mut w, artist, albums).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_albums(&self) -> ResponseBox {
let index = &*self.index_var.get();
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
serialization::write_albums_json(index, &mut w).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_queue(&self) -> ResponseBox {
let index = &*self.index_var.get();
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
let queue = self.player.get_queue();
serialization::write_queue_json(
index,
&mut w,
&queue.tracks[..],
).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_enqueue(&self, id: &str) -> ResponseBox {
let track_id = match TrackId::parse(id) {
Some(tid) => tid,
None => return self.handle_bad_request("Invalid track id."),
};
let index = &*self.index_var.get();
// Confirm that the track exists before we enqueue it.
let _track = match index.get_track(track_id) {
Some(t) => t,
None => return self.handle_not_found(),
};
let queue_id = self.player.enqueue(index, track_id);
let queue_id_json = format!(r#""{}""#, queue_id);
Response::from_string(queue_id_json)
.with_status_code(201) // "201 Created"
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_get_volume(&self) -> ResponseBox {
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
let volume = self.player.get_volume();
serialization::write_volume_json(&mut w, volume).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_change_volume(&self, add: Millibel) -> ResponseBox {
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
let volume = self.player.change_volume(add);
serialization::write_volume_json(&mut w, volume).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_search(&self, raw_query: &str) -> ResponseBox {
let mut opt_query = None;
for (k, v) in url::form_urlencoded::parse(raw_query.as_bytes()) {
if k == "q" {
opt_query = Some(v);
}
};
let query = match opt_query {
Some(q) => q,
None => return self.handle_bad_request("Missing search query."),
};
let mut words = Vec::new();
normalize_words(query.as_ref(), &mut words);
let mut artists = Vec::new();
let mut albums = Vec::new();
let mut tracks = Vec::new();
let index = &*self.index_var.get();
index.search_artist(&words[..], &mut artists);
index.search_album(&words[..], &mut albums);
index.search_track(&words[..], &mut tracks);
// Cap the number of search results we serve. We can easily produce many
// many results (especially when searching for "t", a prefix of "the",
// or when searching "a"). Searching is quite fast, but parsing and
// rendering the results in the frontend is slow, and having this many
// results is not useful anyway, so we cap them.
let n_artists = artists.len().min(250);
let n_albums = albums.len().min(250);
let n_tracks = tracks.len().min(250);
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
serialization::write_search_results_json(
index,
&mut w,
&artists[..n_artists],
&albums[..n_albums],
&tracks[..n_tracks],
).unwrap();
Response::from_data(w.into_inner())
.with_status_code(200)
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_get_scan_status(&self) -> ResponseBox {
// TODO: We could add a long polling query parameter here, and version
// the status. Then in the request, include the previous version. If the
// current version is newer, respond immediately. If not, block for some
// time to wait for a new status, then return the current status. That
// way, we could make extremely responsive status updates.
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
let status = self.scanner.get_status();
serialization::write_scan_status_json(&mut w, status).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_start_scan(&self) -> ResponseBox {
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
let status = self.scanner.start(self.config.clone());
serialization::write_scan_status_json(&mut w, Some(status)).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
fn handle_stats(&self) -> ResponseBox {
let index = &*self.index_var.get();
let buffer = Vec::new();
let mut w = io::Cursor::new(buffer);
serialization::write_stats_json(index, &mut w).unwrap();
Response::from_data(w.into_inner())
.with_header(header_content_type("application/json"))
.boxed()
}
/// Router function for all /api/«endpoint» calls.
fn handle_api_request(
&self,
db: &mut Database,
method: &Method,
endpoint: &str,
arg: Option<&str>,
query: &str,
) -> ResponseBox {
match (method, endpoint, arg) {
// API endpoints.
(&Get, "cover", Some(t)) => self.handle_album_cover(t),
(&Get, "thumb", Some(t)) => self.handle_thumb(t),
(&Get, "waveform", Some(t)) => self.handle_waveform(db, t),
(&Get, "track", Some(t)) => self.handle_track(t),
(&Get, "album", Some(a)) => self.handle_album(a),
(&Get, "artist", Some(a)) => self.handle_artist(a),
(&Get, "albums", None) => self.handle_albums(),
(&Get, "search", None) => self.handle_search(query),
(&Get, "stats", None) => self.handle_stats(),
// Play queue manipulation.
(&Get, "queue", None) => self.handle_queue(),
(&Put, "queue", Some(t)) => self.handle_enqueue(t),
// Volume control, volume up/down change the volume by 1 dB.
(&Get, "volume", None) => self.handle_get_volume(),
(&Post, "volume", Some("up")) => self.handle_change_volume(Millibel( 1_00)),
(&Post, "volume", Some("down")) => self.handle_change_volume(Millibel(-1_00)),
// Background library scanning.
(&Get, "scan", Some("status")) => self.handle_get_scan_status(),
(&Post, "scan", Some("start")) => self.handle_start_scan(),
_ => self.handle_bad_request("No such (method, endpoint, argument) combination."),
}
}
fn handle_request(&self, db: &mut Database, request: Request) {
// Break url into the part before the ? and the part after. The part
// before we split on slashes.
let mut url_iter = request.url().splitn(2, '?');
// The individual parts in between the slashes.
let mut p0 = None;
let mut p1 = None;
let mut p2 = None;
if let Some(base) = url_iter.next() {
let mut parts = base.splitn(4, '/').filter(|x| x.len() > 0);
p0 = parts.next();
p1 = parts.next();
p2 = parts.next();
}
let query = url_iter.next().unwrap_or("");
// A very basic router. See also docs/api.md for an overview.
let response = match (request.method(), p0, p1) {
// API endpoints go through the API router, to keep this match arm
// a bit more concise.
(method, Some("api"), Some(endpoint)) => self.handle_api_request(db, method, endpoint, p2, query),
// Web endpoints.
(&Get, None, None) => self.handle_static_file("app/index.html", "text/html"),
(&Get, Some("style.css"), None) => self.handle_static_file("app/style.css", "text/css"),
(&Get, Some("dark.css"), None) => self.handle_static_file("app/dark.css", "text/css"),
(&Get, Some("manifest.json"), None) => self.handle_static_file("app/manifest.json", "text/javascript"),
(&Get, Some("app.js"), None) => self.handle_static_file("app/output/app.js", "text/javascript"),
(&Get, Some(path), None) if path.ends_with(".svg") => {
let mut file_path = "app/".to_string();
file_path.push_str(path);
self.handle_static_file(&file_path, "image/svg+xml")
}
// Fallback.
(&Get, _, _) => self.handle_not_found(),
_ => self.handle_bad_request("Expected a GET request."),
};
match request.respond(response) {
Ok(()) => {},
Err(err) => println!("Error while responding to request: {:?}", err),
}
}
}
pub fn serve(bind: &str, service: Arc<MetaServer>) -> ! {
let server = match Server::http(bind) {
Ok(s) => s,
Err(..) => {
eprintln!("Failed to start server, could not bind to {}.", bind);
std::process::exit(1);
}
};
let server = Arc::new(server);
// Browsers do not make more than 8 requests in parallel, so having more
// handler threads is not useful; I expect only a single user to be
// browsing at a time.
let n_threads = 8;
let mut threads = Vec::with_capacity(n_threads);
for i in 0..n_threads {
let server_i = server.clone();
let service_i = service.clone();
let name = format!("http_server_{}", i);
let builder = thread::Builder::new().name(name);
let join_handle = builder.spawn(move || {
let connection = database::connect_readonly(service_i.config.db_path())
.expect("Failed to connect to database.");
let mut db = Database::new(&connection)
.expect("Failed to initialize database.");
loop {
let request = match server_i.recv() {
Ok(rq) => rq,
Err(e) => {
println!("Error: {:?}", e);
break;
}
};
service_i.handle_request(&mut db, request);
}
}).unwrap();
threads.push(join_handle);
}
// When running under systemd, the service is ready when the server is
// accepting connections, which is now.
systemd::notify_ready_if_can_notify();
// Block until the server threads exit, which will not happen.
for handle in threads {
handle.join().unwrap();
}
// This code is unreachable, but serves to satisfy the typechecker.
loop {}
}
avg_line_length: 36.885455 | max_line_length: 115 | alphanum_fraction: 0.567063
hexsha: e9f26cb39169caa782a8bd6b96c353a68a6befe5 | size: 279
// compile-flags: -Z parse-only
// ignore-tidy-linelength
struct Foo;
impl Foo {
fn foo() {}
#[stable(feature = "rust1", since = "1.0.0")]
} //~ ERROR expected one of `async`, `const`, `crate`, `default`, `existential`, `extern`, `fn`, `pub`, `type`, or
fn main() {}
avg_line_length: 21.461538 | max_line_length: 114 | alphanum_fraction: 0.584229
hexsha: 8a2f83e83d9cca779c0cd8e02eeab8db2661fd6c | size: 65
pub mod rpm_tape;
pub mod rpm_tape_box;
pub mod rpm_tape_tracks;
avg_line_length: 16.25 | max_line_length: 24 | alphanum_fraction: 0.815385
hexsha: 8ab384f8e324f0187ce30718a16aded1ddf16719 | size: 440
pub fn part1(_input: &str) -> ! {
todo!()
}
pub fn part2(_input: &str) -> ! {
todo!()
}
#[allow(unreachable_code)]
#[cfg(test)]
#[test]
fn part1_test() {
assert_eq!(
part1(&std::fs::read_to_string("input/day07.txt").unwrap()),
()
);
}
#[allow(unreachable_code)]
#[cfg(test)]
#[test]
fn part2_test() {
assert_eq!(
part2(&std::fs::read_to_string("input/day07.txt").unwrap()),
()
);
}
avg_line_length: 16.296296 | max_line_length: 68 | alphanum_fraction: 0.540909
Description: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3TB of GitHub data covering 48 programming languages. We selected a small portion of it to optimize smaller language models for Rust, a popular statically typed language.
Target Language: Rust
Dataset Size:
Preprocessing:
Tokenizer: A Byte Pair Encoding (BPE) tokenizer whose GPT-2 base vocabulary is extended with special tokens, including dedicated tab and whitespace tokens.
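The vocabulary-extension step can be sketched as follows. This is a minimal illustration, not the dataset's actual tokenizer: it assumes the base vocabulary is represented as a map from token string to id, and appends hypothetical special tokens (a tab and runs of spaces) at the end of the id space.

```rust
use std::collections::HashMap;

/// Append special tokens (e.g. tab and whitespace runs) to an existing
/// vocabulary, assigning them ids directly after the base vocabulary.
fn extend_vocab(vocab: &mut HashMap<String, u32>, specials: &[&str]) {
    let mut next_id = vocab.len() as u32;
    for tok in specials {
        // Only add tokens that are not already present; existing tokens
        // keep their original ids and do not consume a new id.
        vocab.entry(tok.to_string()).or_insert_with(|| {
            let id = next_id;
            next_id += 1;
            id
        });
    }
}

fn main() {
    // Tiny stand-in for the GPT-2 base vocabulary.
    let mut vocab: HashMap<String, u32> =
        [("fn".to_string(), 0), ("let".to_string(), 1)].into();
    // Hypothetical special tokens: a tab and runs of 2 and 4 spaces.
    extend_vocab(&mut vocab, &["\t", "  ", "    "]);
    assert_eq!(vocab.len(), 5);
    assert_eq!(vocab["\t"], 2);
    println!("vocab size: {}", vocab.len());
}
```

In a real tokenizer the merge rules would also need updating; this sketch only shows the id-assignment convention of appending new tokens after the base vocabulary.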
Training Sequences: Sequences are constructed by joining tokenized training documents until the 2048-token context length is reached (1024 tokens for full fine-tuning).
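The sequence-construction step described above can be sketched like this. It is a simplified illustration under stated assumptions, not the pipeline actually used: documents are assumed to be already tokenized into `Vec<u32>`, a hypothetical separator id is inserted between joined documents, and any remainder shorter than the context length is simply dropped.

```rust
/// Join tokenized documents (with a separator token between them) and
/// slice the resulting token stream into fixed-length training sequences.
fn pack_sequences(docs: &[Vec<u32>], sep: u32, context_len: usize) -> Vec<Vec<u32>> {
    let mut stream = Vec::new();
    for doc in docs {
        stream.extend_from_slice(doc);
        stream.push(sep); // separator between documents
    }
    // Emit only full-length sequences; the tail remainder is dropped here.
    stream
        .chunks_exact(context_len)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let docs = vec![vec![1, 2, 3], vec![4, 5]];
    // Tiny context length so the example is readable; the dataset uses 2048.
    let seqs = pack_sequences(&docs, 0, 4);
    assert_eq!(seqs, vec![vec![1, 2, 3, 0]]); // remainder [4, 5, 0] is dropped
    println!("{} sequence(s)", seqs.len());
}
```

Dropping the remainder is one of several possible choices; a production pipeline might instead pad the final chunk or carry it over into the next shard.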