| Column | Type | Range / Values |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 5–1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3–344 |
| max_stars_repo_name | stringlengths | 5–125 |
| max_stars_repo_head_hexsha | stringlengths | 40–78 |
| max_stars_repo_licenses | sequencelengths | 1–11 |
| max_stars_count | int64 | 1–368k ⌀ |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 ⌀ |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 ⌀ |
| max_issues_repo_path | stringlengths | 3–344 |
| max_issues_repo_name | stringlengths | 5–125 |
| max_issues_repo_head_hexsha | stringlengths | 40–78 |
| max_issues_repo_licenses | sequencelengths | 1–11 |
| max_issues_count | int64 | 1–116k ⌀ |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 ⌀ |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 ⌀ |
| max_forks_repo_path | stringlengths | 3–344 |
| max_forks_repo_name | stringlengths | 5–125 |
| max_forks_repo_head_hexsha | stringlengths | 40–78 |
| max_forks_repo_licenses | sequencelengths | 1–11 |
| max_forks_count | int64 | 1–105k ⌀ |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 ⌀ |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 ⌀ |
| content | stringlengths | 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01–1 |

⌀ indicates the column contains null values.
f4726dd5be92bb06abd8e27fa35ffdbf15e0de04 | 1,679 | md | Markdown | docs/framework/configure-apps/file-schema/windows-workflow-foundation/bufferreceive.md | wangshuai-007/docs.zh-cn | b0f422093b4d46695643d4ef95ea85c51ef3c702 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/windows-workflow-foundation/bufferreceive.md | wangshuai-007/docs.zh-cn | b0f422093b4d46695643d4ef95ea85c51ef3c702 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/windows-workflow-foundation/bufferreceive.md | wangshuai-007/docs.zh-cn | b0f422093b4d46695643d4ef95ea85c51ef3c702 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: '<bufferReceive>'
ms.custom:
ms.date: 03/30/2017
ms.prod: .net-framework
ms.reviewer:
ms.suite:
ms.tgt_pltfrm:
ms.topic: reference
ms.assetid: b23c3a54-10d4-4f13-ab6d-98b26b76f22a
caps.latest.revision: "3"
author: dotnet-bot
ms.author: dotnetcontent
manager: wpickett
ms.openlocfilehash: e0fc3fb901e0f70b616f403b98abc7b9645b2b44
ms.sourcegitcommit: ce279f2d7fe2220e6ea0a25a8a7a5370ddf8d9f0
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/02/2017
---
# <a name="ltbufferreceivegt"></a><bufferReceive>
 A service behavior that enables a service to use buffered receive processing, which lets a workflow service process messages out of order.

\<system.serviceModel>

\<behaviors>

\<serviceBehaviors>

\<behavior>

\<bufferReceive>
## <a name="syntax"></a>语法
```xml
<behaviors>
<serviceBehaviors>
<behavior name="String">
<bufferReceive maxPendingMessagesPerChannel="Integer" />
</behavior>
</serviceBehaviors>
</behaviors>
```
## <a name="attributes-and-elements"></a>特性和元素
下列各节描述了特性、子元素和父元素。
### <a name="attributes"></a>特性
|特性|描述|
|---------------|-----------------|
|maxPendingMessagesPerChannel|一个整数,指定每个通道允许的最大挂起消息数。 默认值为 512。 此属性限制工作流服务可接收的无序消息数。|
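 Putting the attribute in context, the following is a minimal configuration sketch; the behavior name and service name are illustrative placeholders, not part of the schema:

```xml
<configuration>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior name="BufferedBehavior">
          <!-- Hold up to 64 out-of-order messages per channel. -->
          <bufferReceive maxPendingMessagesPerChannel="64" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <!-- "OrderWorkflowService" is a hypothetical workflow service. -->
      <service name="OrderWorkflowService"
               behaviorConfiguration="BufferedBehavior">
        <!-- Endpoints omitted for brevity. -->
      </service>
    </services>
  </system.serviceModel>
</configuration>
```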
### <a name="child-elements"></a>子元素
无。
### <a name="parent-elements"></a>父元素
|元素|描述|
|-------------|-----------------|
|[\<行为 > 的\<serviceBehaviors >](../../../../../docs/framework/configure-apps/file-schema/windows-workflow-foundation/behavior-of-servicebehaviors-of-workflow.md)|指定行为元素。|
## <a name="see-also"></a>另请参阅
<!-- <xref:System.ServiceModel.Activities.Description.BufferReceiveServiceBehavior> -->
<xref:System.ServiceModel.Activities.Configuration.BufferedReceiveElement>
| 26.650794 | 172 | 0.670042 | yue_Hant | 0.357998 |
f4727f40fab3d4ddab53a21c13db020bd5679bc8 | 1,992 | md | Markdown | docs/framework/wpf/app-development/how-to-determine-if-a-page-is-browser-hosted.md | mitharp/docs.ru-ru | 6ca27c46e446ac399747e85952a3135193560cd8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/app-development/how-to-determine-if-a-page-is-browser-hosted.md | mitharp/docs.ru-ru | 6ca27c46e446ac399747e85952a3135193560cd8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/app-development/how-to-determine-if-a-page-is-browser-hosted.md | mitharp/docs.ru-ru | 6ca27c46e446ac399747e85952a3135193560cd8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "How to: Determine whether a page is browser-hosted"
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- hosted pages in browser [WPF]
- pages [WPF], hosted in browser
ms.assetid: 737e0f26-8371-49b4-9579-70879e51e1aa
ms.openlocfilehash: aa2aa36e4f887c4fa02314f7834e2a46268c8ff9
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 01/23/2019
ms.locfileid: "54661299"
---
# <a name="how-to-determine-if-a-page-is-browser-hosted"></a>Как выполнить Определить страницу в браузере
В этом примере показано, как определить, если <xref:System.Windows.Controls.Page> размещается в браузере.
## <a name="example"></a>Пример
Объект <xref:System.Windows.Controls.Page> может размещаться независимо и, следовательно, может быть загружена в несколько различных типов узлов, включая <xref:System.Windows.Controls.Frame>, <xref:System.Windows.Navigation.NavigationWindow>, или другой браузер. Это может произойти, если у вас есть библиотечная сборка, содержащий одну или несколько страниц, на который ссылаются несколько автономных и отображаемых в обозревателе ([!INCLUDE[TLA#tla_xbap](../../../../includes/tlasharptla-xbap-md.md)]) размещения приложений.
Следующий пример демонстрирует, как использовать <xref:System.Windows.Interop.BrowserInteropHelper.IsBrowserHosted%2A?displayProperty=nameWithType> на предмет <xref:System.Windows.Controls.Page> размещается в браузере.
[!code-csharp[HOWTOBrowserInteropHelperSnippets#IsBrowserHostedCODE](../../../../samples/snippets/csharp/VS_Snippets_Wpf/HOWTOBrowserInteropHelperSnippets/CSharp/Page1.xaml.cs#isbrowserhostedcode)]
[!code-vb[HOWTOBrowserInteropHelperSnippets#IsBrowserHostedCODE](../../../../samples/snippets/visualbasic/VS_Snippets_Wpf/HOWTOBrowserInteropHelperSnippets/visualbasic/page1.xaml.vb#isbrowserhostedcode)]
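 Because the snippet includes above do not render outside the docs build, here is a minimal hand-written sketch of the same check (the page class is hypothetical):

```csharp
using System.Windows.Controls;
using System.Windows.Interop;

public partial class Page1 : Page   // Page1 is a placeholder page class
{
    public Page1()
    {
        InitializeComponent();

        if (BrowserInteropHelper.IsBrowserHosted)
        {
            // The page is running in a browser (for example, in an XBAP).
        }
        else
        {
            // The page is hosted by a Frame or NavigationWindow
            // in a standalone application.
        }
    }
}
```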
## <a name="see-also"></a>См. также
- <xref:System.Windows.Controls.Frame>
- <xref:System.Windows.Controls.Page>
| 62.25 | 529 | 0.799699 | yue_Hant | 0.24996 |
f472c323f4269545d637319c210cf256b2fae5fd | 2,755 | md | Markdown | README.md | Arkanii/git-branch-cli | 15811b7cfe1f60f3e0f9f8a4556cd6f3ae2f4eb4 | [
"MIT"
] | 15 | 2021-11-08T13:57:22.000Z | 2021-11-24T12:15:32.000Z | README.md | Arkanii/git-branch-cli | 15811b7cfe1f60f3e0f9f8a4556cd6f3ae2f4eb4 | [
"MIT"
] | null | null | null | README.md | Arkanii/git-branch-cli | 15811b7cfe1f60f3e0f9f8a4556cd6f3ae2f4eb4 | [
"MIT"
] | null | null | null | # Git Branch CLI
[![npm version](https://img.shields.io/npm/v/git-branch-cli.svg?style=flat-square)](https://www.npmjs.com/package/git-branch-cli)
[![npm downloads](https://img.shields.io/npm/dt/git-branch-cli.svg?style=flat-square)](https://www.npmjs.com/package/git-branch-cli)
[![gitmoji badge](https://img.shields.io/badge/gitmoji-%20😜%20😍-FFDD67.svg?style=flat-square)](https://github.com/carloscuesta/gitmoji)
> A command-line client for creating git branches easily using interactive questions.
![git_branch_cli_demo](demo.gif)
## Features
- Easy to use
- Interactive prompt
- Fully configurable
## Install
### For global use:
```shell
$ yarn global add git-branch-cli
```
Or
```shell
$ npm install -g git-branch-cli
```
### For local use:
```shell
$ yarn add git-branch-cli
```
Or
```shell
$ npm install git-branch-cli
```
Add the following to your `package.json` file:
```json
"scripts": {
"git-branch": "git-branch"
}
```
## Usage
### If installed globally :
```shell
git-branch
```
### If installed locally :
```shell
yarn run git-branch
```
or
```shell
npm run git-branch
```
---
```
A client for creating git branch easily using interactive questions.
Usage
$ git-branch
Options
--init, -i Initialize a new branch
--version, -v Print git-branch-cli installed version
```
## Configuration
The configuration for using git-branch-cli should be located in the `package.json` file.
Example :
```json
"git-branch-cli": [
{
"name": "type",
"message": "What type is your branch ?",
"type": "autocomplete",
"choices": [
"feature",
"bug"
],
"characterAfter": "/"
},
{
"name": "name",
"message": "What is the name of the feature ?",
"characterAfter": "-"
},
{
"name": "ticket",
"message": "What is the ticket number ?",
"type": "number"
}
]
```
#### name (required)
Type: `string`
Used only to identify the question in the configuration.
#### message (required)
Type: `string`
The question that will be displayed
##### type
Type: `string`
Type of the answer.
Defaults: `input` - Possible values: `input`, `number`, `autocomplete`.
##### choices (required if `type` field is `autocomplete`)
Type: `array`
The different choices for the user
#### characterAfter
Type: `char`
The character that will be positioned after the user's slugified answer
#### required
Type: `bool`
Determines whether the user's answer can be blank.

If this field is `false` and the user's answer is blank, the `characterAfter` value isn't added to the branch name.
Defaults: `true`
## Credits
Created by Théo Frison; inspired by and partly adapted from [this project](https://github.com/carloscuesta/gitmoji-cli).
| 17.88961 | 135 | 0.642831 | eng_Latn | 0.909657 |
f472dc1281806a112d352d5e616ec557324c6d7e | 3,344 | md | Markdown | README.md | mailtrack-io/snowplow-php-tracker | 0dfc376b71708c93d6bb0c501e71faf1636dc88c | [
"Apache-2.0"
] | null | null | null | README.md | mailtrack-io/snowplow-php-tracker | 0dfc376b71708c93d6bb0c501e71faf1636dc88c | [
"Apache-2.0"
] | null | null | null | README.md | mailtrack-io/snowplow-php-tracker | 0dfc376b71708c93d6bb0c501e71faf1636dc88c | [
"Apache-2.0"
] | null | null | null | PHP Analytics for Snowplow
==========================
[![Build Status][travis-image]][travis]
[![Coverage Status][coveralls-image]][coveralls]
[![Latest Stable Version][packagist-image-1]][packagist-1]
[![Total Downloads][packagist-image-2]][packagist-2]
[![Dependency Status][versioneye-image]][versioneye]
## Overview
Add analytics into your PHP apps and scripts with the **[Snowplow][1]** event tracker for **[PHP][2]**.
With this tracker you can collect event data from your PHP-based applications, games, and frameworks.
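For orientation, here is a minimal usage sketch; the collector host, tracker namespace, and app ID are placeholders, and the full set of options is covered in the setup guide linked below:

```php
<?php
require 'vendor/autoload.php';

use Snowplow\Tracker\Tracker;
use Snowplow\Tracker\Subject;
use Snowplow\Tracker\Emitters\SyncEmitter;

// "collector.example.com" is a placeholder for your collector endpoint.
$emitter = new SyncEmitter("collector.example.com");
$subject = new Subject();
$tracker = new Tracker($emitter, $subject, "my-namespace", "my-app-id");

// Track a simple page view.
$tracker->trackPageView("http://www.example.com", "Example Page");
```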
## Find out more
| Technical Docs | Setup Guide | Roadmap | Contributing |
|---------------------------------|---------------------------|-------------------------|-----------------------------------|
| ![i1] [techdocs-image] | ![i2] [setup-image] | ![i3] [roadmap-image] | ![i4] [contributing-image] |
| **[Technical Docs] [techdocs]** | **[Setup Guide] [setup]** | **[Roadmap] [roadmap]** | **[Contributing] [contributing]** |
## Run tests
* Clone this repo
* run `cd <project_path>`
* run `docker-compose run --rm snowplow composer.phar install`
* run `docker-compose run --rm snowplow script/tests.sh`
## Copyright and license
The Snowplow PHP Tracker is copyright 2014 Snowplow Analytics Ltd.
Licensed under the **[Apache License, Version 2.0] [license]** (the "License");
you may not use this software except in compliance with the License.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
[1]: http://snowplowanalytics.com/
[2]: http://php.net/
[travis]: https://travis-ci.org/snowplow/snowplow-php-tracker
[travis-image]: https://travis-ci.org/snowplow/snowplow-php-tracker.svg?branch=master
[coveralls]: https://coveralls.io/r/snowplow/snowplow-php-tracker?branch=master
[coveralls-image]: https://coveralls.io/repos/snowplow/snowplow-php-tracker/badge.png?branch=master
[versioneye]: https://www.versioneye.com/user/projects/542ac2c1fc3f5c175f000035
[versioneye-image]: https://www.versioneye.com/user/projects/542ac2c1fc3f5c175f000035/badge.svg?style=flat
[packagist-1]: https://packagist.org/packages/snowplow/snowplow-tracker
[packagist-image-1]: https://poser.pugx.org/snowplow/snowplow-tracker/v/stable.png
[packagist-2]: https://packagist.org/packages/snowplow/snowplow-tracker
[packagist-image-2]: https://poser.pugx.org/snowplow/snowplow-tracker/downloads.png
[techdocs-image]: https://d3i6fms1cm1j0i.cloudfront.net/github/images/techdocs.png
[setup-image]: https://d3i6fms1cm1j0i.cloudfront.net/github/images/setup.png
[roadmap-image]: https://d3i6fms1cm1j0i.cloudfront.net/github/images/roadmap.png
[contributing-image]: https://d3i6fms1cm1j0i.cloudfront.net/github/images/contributing.png
[techdocs]: https://github.com/snowplow/snowplow/wiki/PHP-Tracker
[setup]: https://github.com/snowplow/snowplow/wiki/PHP-Tracker-Setup
[roadmap]: https://github.com/snowplow/snowplow/wiki/PHP-Tracker-Roadmap
[contributing]: https://github.com/snowplow/snowplow/wiki/PHP-Tracker-Contributing
[license]: http://www.apache.org/licenses/LICENSE-2.0
| 49.910448 | 125 | 0.709629 | yue_Hant | 0.393972 |
f4733a564306ec87736d2774cc3d45804adbf192 | 719 | md | Markdown | docs/assembler/masm/ml-warning-a4004.md | FlorianHaupt/cpp-docs.de-de | 0ccd351cf627f1a41b14514c746f729b778d87e3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/ml-warning-a4004.md | FlorianHaupt/cpp-docs.de-de | 0ccd351cf627f1a41b14514c746f729b778d87e3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/ml-warning-a4004.md | FlorianHaupt/cpp-docs.de-de | 0ccd351cf627f1a41b14514c746f729b778d87e3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ML-Warnung A4004
ms.date: 08/30/2018
ms.topic: error-reference
f1_keywords:
- A4004
helpviewer_keywords:
- A4004
ms.assetid: f11b13c9-fa8d-49f2-b816-a6b7871c7261
ms.openlocfilehash: c413847f05afd5baecf5a891903be502fa7ca840
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 10/31/2018
ms.locfileid: "50592685"
---
# <a name="ml-warning-a4004"></a>ML-Warnung A4004
**kann nicht davon ausgehen, CS**
Es wurde versucht, einen Wert für die CS-Register anzunehmen. CS ist immer auf das aktuelle Segment oder die Gruppe festgelegt.
## <a name="see-also"></a>Siehe auch
[ML-Fehlermeldungen](../../assembler/masm/ml-error-messages.md)<br/> | 28.76 | 127 | 0.77886 | deu_Latn | 0.673345 |
f4736944d681077e2bc913864bdd8a2c5887560d | 6,364 | md | Markdown | WindowsServerDocs/virtualization/hyper-v/get-started/Install-the-Hyper-V-role-on-Windows-Server.md | baleng/windowsserverdocs.it-it | 8896bd5cff0154bf6b57c7de33efd7e65710947a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/virtualization/hyper-v/get-started/Install-the-Hyper-V-role-on-Windows-Server.md | baleng/windowsserverdocs.it-it | 8896bd5cff0154bf6b57c7de33efd7e65710947a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/virtualization/hyper-v/get-started/Install-the-Hyper-V-role-on-Windows-Server.md | baleng/windowsserverdocs.it-it | 8896bd5cff0154bf6b57c7de33efd7e65710947a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Install the Hyper-V role on Windows Server
description: Provides instructions for installing Hyper-V by using Server Manager or Windows PowerShell
ms.prod: windows-server
ms.service: na
manager: dongill
ms.technology: compute-hyper-v
ms.tgt_pltfrm: na
ms.topic: get-started-article
ms.assetid: 8e871317-09d2-4314-a6ec-ced12b7aee89
author: KBDAzure
ms.author: kathydav
ms.date: 12/02/2016
ms.openlocfilehash: 2687a907852e2a81f03b147df1425cd01b34fb76
ms.sourcegitcommit: 6aff3d88ff22ea141a6ea6572a5ad8dd6321f199
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/27/2019
ms.locfileid: "71392807"
---
# <a name="install-the-hyper-v-role-on-windows-server"></a>Installare il ruolo Hyper-V in Windows Server
>Si applica a: Windows Server 2016, Windows Server 2019
Per creare ed eseguire macchine virtuali, installare il ruolo Hyper-V in Windows Server usando Server Manager o il cmdlet **Install-WindowsFeature** in Windows PowerShell. Per Windows 10, vedere [installare Hyper-V in Windows 10](https://docs.microsoft.com/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v).
Per ulteriori informazioni su Hyper-V, vedere [Panoramica della tecnologia Hyper-v](../Hyper-V-Technology-Overview.md). Per provare Windows Server 2019, è possibile scaricare e installare una copia di valutazione. Vedere il [centro di valutazione](https://www.microsoft.com/evalcenter/evaluate-windows-server-2019).
Prima di installare Windows Server o aggiungere il ruolo Hyper-V, verificare che:
- L'hardware del computer è compatibile. Per informazioni dettagliate, vedere [requisiti di sistema per Windows Server](../../../get-started/System-Requirements.md) e [requisiti di sistema per Hyper-V in Windows Server](../System-requirements-for-Hyper-V-on-Windows.md).
- Non si prevede di usare app di virtualizzazione di terze parti che si basano sulle stesse funzionalità del processore richieste da Hyper-V. Gli esempi includono VMWare Workstation e VirtualBox. È possibile installare Hyper-V senza disinstallare le altre app. Tuttavia, se si tenta di usarli per gestire le macchine virtuali quando l'hypervisor Hyper-V è in esecuzione, è possibile che le macchine virtuali non vengano avviate o non siano affidabili. Per informazioni dettagliate e istruzioni per disattivare l'hypervisor Hyper-V se è necessario usare una di queste app, vedere la pagina relativa alle [applicazioni di virtualizzazione che non funzionano insieme a Hyper-v, Device Guard e Credential Guard](https://support.microsoft.com/help/3204980/virtualization-applications-do-not-work-together-with-hyper-v-device-g).
Se si desidera installare solo gli strumenti di gestione, ad esempio la console di gestione di Hyper-V, vedere [gestire in remoto gli host Hyper-v con la console di gestione di Hyper-v](../Manage/Remotely-manage-Hyper-V-hosts.md).
## <a name="install-hyper-v-by-using-server-manager"></a>Installare Hyper-V usando Server Manager
1. In **Server Manager**, via il **Gestisci** menu, fare clic su **Aggiungi ruoli e funzionalità**.
2. Nella pagina **Prima di iniziare** verificare che il server di destinazione e l'ambiente di rete siano preparati per il ruolo e la funzionalità che si desidera installare. Fare clic su **Avanti**.
3. Nella pagina **Selezione tipo di installazione** selezionare **Installazione basata su ruoli o basata su funzionalità** e quindi fare clic su **Avanti**.
4. Nella pagina **Selezione server di destinazione** selezionare un server dal pool di server e quindi fare clic su **Avanti**.
5. Nel **Selezione ruoli server** selezionare **Hyper-V**.
6. Per aggiungere gli strumenti che consentono di creare e gestire macchine virtuali, fare clic su **Aggiungi funzionalità**. Nella pagina funzionalità, fare clic su **Avanti**.
7. Nel **Crea commutatori virtuali** pagina **migrazione della macchina virtuale** pagina e **archivi predefiniti** pagina, selezionare le opzioni appropriate.
8. Nel **Conferma selezioni per l'installazione** selezionare **Riavvia automaticamente il server di destinazione se necessario**, quindi fare clic su **installare**.
9. Al termine dell'installazione, verificare che Hyper-V sia installato correttamente. Aprire il **tutti i server** in Server Manager e selezionare un server in cui è installato Hyper-V. Controllare il **ruoli e funzionalità** riquadro della pagina per il server selezionato.
## <a name="install-hyper-v-by-using-the-install-windowsfeature-cmdlet"></a>Installare Hyper-V usando il cmdlet Install-WindowsFeature
1. Sul desktop di Windows fare clic sul pulsante Start e digitare qualsiasi parte del nome **Windows PowerShell**.
2. Fare doppio clic su Windows PowerShell e selezionare **Esegui come amministratore**.
3. Per installare Hyper-V in un server ci si connette in remoto, eseguire il comando seguente e sostituire `<computer_name>` con il nome del server.
```powershell
Install-WindowsFeature -Name Hyper-V -ComputerName <computer_name> -IncludeManagementTools -Restart
```
 If you're connected to the server locally, run the command without `-ComputerName <computer_name>`.

4. After the server restarts, you can see that the Hyper-V role is installed, and see what other roles and features are installed, by running the following command:
```powershell
Get-WindowsFeature -ComputerName <computer_name>
```
 If you're connected to the server locally, run the command without `-ComputerName <computer_name>`.

> [!NOTE]
> If you install this role on a server that runs the Server Core installation option of Windows Server 2016 and use the `-IncludeManagementTools` parameter, only the Hyper-V Module for Windows PowerShell is installed. You can use the GUI management tool, Hyper-V Manager, on another computer to remotely manage a Hyper-V host that runs on a Server Core installation. For instructions on connecting remotely, see [Remotely manage Hyper-V hosts with Hyper-V Manager](../Manage/Remotely-manage-Hyper-V-hosts.md).

## <a name="see-also"></a>See also
- [Install-WindowsFeature](https://docs.microsoft.com/powershell/module/Microsoft.Windows.ServerManager.Migration/Install-WindowsFeature)
| 76.674699 | 823 | 0.777184 | ita_Latn | 0.995008 |
f4737855dca805bba8828fe3eb97d69dda22f546 | 2,652 | md | Markdown | _users/16-deployment.md | hiocs/ServiceComb.github.io | b26bf6bc38467d8f4a1fa4c5e469a725e6532a7f | [
"Apache-2.0"
] | null | null | null | _users/16-deployment.md | hiocs/ServiceComb.github.io | b26bf6bc38467d8f4a1fa4c5e469a725e6532a7f | [
"Apache-2.0"
] | null | null | null | _users/16-deployment.md | hiocs/ServiceComb.github.io | b26bf6bc38467d8f4a1fa4c5e469a725e6532a7f | [
"Apache-2.0"
] | null | null | null | ---
title: "部署运行"
lang: en
ref: deployment
permalink: /users/deployment/
excerpt: "部署运行."
last_modified_at: 2017-06-06T10:01:43-04:00
redirect_from:
- /theme-setup/
---
{% include toc %}
微服务框架当前提供了两种部署运行模式:standalone模式和web容器模式。推荐使用standalone模式拉起服务进程。
## standalone模式
以下软件需要被安装:
微服务框架提供了standalone部署运行模式,可直接在本地通过Main函数拉起。
编写Main函数,初始化日志和加载服务配置,内容如下。
```java
import io.servicecomb.foundation.common.utils.BeanUtils;
import io.servicecomb.foundation.common.utils.Log4jUtils;
public class MainServer {
public static void main(String[] args) throws Exception {
Log4jUtils.init(); # 日志初始化
BeanUtils.init(); # Spring bean初始化
...
}
}
```
**Note:**如果使用的是rest网络通道,需要将pom中的transport改为使用transport-rest-vertx包
{: .notice--warning}
运行MainServer即可启动该微服务进程,向外暴露服务。
## web容器模式
如果需要将该微服务加载到web容器中启动运行时,需要新建一个servlet工程包装一下。
### web.xml文件配置
```xml
<web-app>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>
classpath*:META-INF/spring/*.bean.xml
classpath*:app-config.xml
</param-value>
</context-param>
<listener>
<listener-class>io.servicecomb.transport.rest.servlet.RestServletContextListener</listener-class>
</listener>
<servlet>
<servlet-name>RestServlet</servlet-name>
<servlet-class>io.servicecomb.transport.rest.servlet.RestServlet</servlet-class>
<load-on-startup>1</load-on-startup>
<async-supported>true</async-supported>
</servlet>
<servlet-mapping>
<servlet-name>RestServlet</servlet-name>
<url-pattern>/rest/*</url-pattern>
</servlet-mapping>
</web-app>
```
RestServletContextListener用于初始化微服务环境,包括日志、spring等。
需要指定spring配置文件加载路径,通过context-param来配置。 上例中,classpath*:META-INF/spring/*.bean.xml是微服务的内置规则,如果配置中未包含这个路径,则RestServletContextListener内部会将它追加在最后。
另外可以配置servlet-mapping中的url-pattern规则将满足指定规则的URL路由到该servlet中来。
### pom.xml文件配置
当前只有Rest网络通道支持此种运行模式,使用web容器模式时需要将pom文件中的transport改为依赖transport-rest-servlet包。
设置finalName,是方便部署,有这一项后,maven打出来的war包,部署到web容器中,webroot即是这个finalName。
```xml
<dependencies>
<dependency>
<groupId>io.servicecomb</groupId>
<artifactId>transport-rest-servlet</artifactId>
</dependency>
...
</dependencies>
<build>
<finalName>test</finalName>
</build>
```
**Note:**1.RESTful调用应该与web容器中其他静态资源调用(比如html、js等等)隔离开来,所以webroot后一段应该还有一层关键字,比如上面web.xml中举的例子(/test/rest)中的rest。2.以tomcat为例,默认每个war包都有不同的webroot,这个webroot需要是basePath的前缀,比如webroot为test,则该微服务所有的契约都必须以/test打头。3.当微服务加载在web容器中,并直接使用web容器开的http、https端口时,因为是使用的web容器的通信通道,所以需要满足web容器的规则。
{: .notice--warning}
| 25.5 | 280 | 0.732278 | yue_Hant | 0.259928 |
f473d197dae6b369735f854f3f00fea2c786c079 | 11,469 | md | Markdown | writeup_template.md | tsengph/CarND-4-Behavioral-Cloning | bb3240c5f4af723499aeef972ccf353c756e4463 | [
"MIT"
] | null | null | null | writeup_template.md | tsengph/CarND-4-Behavioral-Cloning | bb3240c5f4af723499aeef972ccf353c756e4463 | [
"MIT"
] | null | null | null | writeup_template.md | tsengph/CarND-4-Behavioral-Cloning | bb3240c5f4af723499aeef972ccf353c756e4463 | [
"MIT"
] | null | null | null | # **Behavioral Cloning**
## Writeup Template
### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.
---
**Behavioral Cloning Project**
The goals / steps of this project are the following:
* Use the simulator to collect data of good driving behavior
* Build, a convolution neural network in Keras that predicts steering angles from images
* Train and validate the model with a training and validation set
* Test that the model successfully drives around track one without leaving the road
* Summarize the results with a written report
[//]: # (Image References)
[img_nVidia]: ./images/nVidia_model.png
[img_model]: ./images/model.png
[img_center]: ./images/center.png
[img_center_flipped]: ./images/center_flipped.png
[img_left]: ./images/left.png
[img_right]: ./images/right.png
[img_model_mse_loss]: ./images/model_mse_loss.png
## Rubric Points
### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/432/view) individually and describe how I addressed each point in my implementation.
---
### Files Submitted & Code Quality
#### 1. Submission includes all required files and can be used to run the simulator in autonomous mode
My project includes the following files:
* model.py: containing the script to create and train the model
* drive.py: for driving the car in autonomous mode
* model.h5: containing a trained convolution neural network
* video.mp4: driving around the test track using model.h5
* writeup_report.md: summarizing the results
#### 2. Submission includes functional code
Using the Udacity provided simulator and my drive.py file, the car can be driven autonomously around the track by executing
```sh
python drive.py model.h5 run1
```
Create a video based on images found in the run1 directory.
```sh
python video.py run1
```
#### 3. Submission code is usable and readable
The model.py file contains the code for training and saving the convolution neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.
### Model Architecture and Training Strategy
#### 1. An appropriate model architecture has been employed
My model is based on a Keras implementation of the nVidia convolutional neural network.
![alt text][img_nVidia]
My model consists of three 5x5 convolution layers, two 3x3 convolution layers, and three fully connected layers (model.py lines 155-165).

The model includes a Cropping2D layer to crop out the hood of the car and the upper parts of the images (code line 148), and the data is normalized in the model using a Keras lambda layer (code line 151).
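For reference, here is a minimal Keras sketch of that architecture; details the text does not state, such as the stride-2 setting on the 5x5 convolutions and the ELU activations, are inferred from the layer output shapes listed further below:

```python
# Illustrative sketch of the model described above, not the exact model.py code.
from keras.models import Sequential
from keras.layers import Cropping2D, Lambda, Conv2D, ELU, Flatten, Dense, Dropout

model = Sequential()
# Remove the sky (top 50 rows) and the hood (bottom 20 rows).
model.add(Cropping2D(cropping=((50, 20), (0, 0)), input_shape=(160, 320, 3)))
model.add(Lambda(lambda x: x / 255.0 - 0.5))            # normalize to [-0.5, 0.5]
for depth in (24, 36, 48):                              # three 5x5 convolutions
    model.add(Conv2D(depth, (5, 5), strides=(2, 2)))
    model.add(ELU())
for depth in (64, 64):                                  # two 3x3 convolutions
    model.add(Conv2D(depth, (3, 3)))
    model.add(ELU())
model.add(Flatten())
model.add(Dense(100))
model.add(ELU())
model.add(Dropout(0.2))
model.add(Dense(50))
model.add(ELU())
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(ELU())
model.add(Dense(1))                                     # predicted steering angle
model.compile(optimizer='adam', loss='mse')
```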
#### 2. Attempts to reduce overfitting in the model
The model contains two dropout layers (with a dropout rate of 0.2) after the first and the second fully connected layers in order to reduce overfitting (model.py lines 174, 178).
The model was trained and validated on different data sets to ensure that the model was not overfitting (code line 188-193). The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.
#### 3. Model parameter tuning
The model used an adam optimizer and mean squared error (MSE) loss function, so the learning rate was not tuned manually (model.py line 186).
The one parameter I did tune was the correction angle added to (subtracted from) the driving angle to pair with an image from the left (right) camera (model.py line 116).
I tried the network with correction angles of 0.2–0.3. The model.h5 file accompanying this submission was trained with a correction angle of 0.25.
#### 4. Appropriate training data
Training data was chosen to keep the vehicle driving on the road. I used four kinds of images: (1) the center of the lane, (2) the left side of the road, (3) the right side of the road, and (4) a left-right flipped version of the center camera's image.
For details about how I created the training data, see the next section.
### Model Architecture and Training Strategy
#### 1. Solution Design Approach
The overall strategy for deriving a model architecture was to start with a proven convolution neural network model and then fine-tune the architecture and parameters until it could successfully drive the simulated car around Test track.
My first step was to use a convolutional neural network model similar to the nVidia one, because it focused on mapping raw pixels from front-facing cameras directly to steering commands.
In order to gauge how well the model was working, I split my image and steering angle data into a training and validation set. I found that my first model had a low mean squared error on the training set but a high mean squared error on the validation set, which implied that the model was overfitting. To combat the overfitting, I modified the model so that two dropout layers were applied after the first and second fully connected layers.
Next I implemented a cropping layer as the first layer in my network. This removed the top 50 and bottom 20 pixels from each input image before passing the image on to the convolution layers. The top 50 pixels tended to contain sky/trees/horizon, and the bottom 20 pixels contained the car's hood, all of which are irrelevant to steering and might confuse the model.
I then decided to augment the training dataset by additionally using images from the left and right cameras, as well as a left-right flipped version of the center camera's image.
Next, I used randomize\_image\_brightness() to convert the image to HSV colour space and apply a random brightness reduction to the V channel. This helps the model cope with the dirt section of the track.
In addition, I followed the class's suggestion to use Python generators to serve training and validation data to model.fit_generator(). This made model.py run much faster and more smoothly.
The above steps ran iteratively while I evaluated how well the car was driving around the test track. Finally, it was incredibly cool to see that the model could drive correctly on the test track.
#### 2. Final Model Architecture
The final model architecture (model.py lines 109-144) consisted of a convolution neural network with the following layers and layer sizes ...
```text
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
cropping2d_1 (Cropping2D) (None, 90, 320, 3) 0
_________________________________________________________________
lambda_1 (Lambda) (None, 90, 320, 3) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 43, 158, 24) 1824
_________________________________________________________________
elu_1 (ELU) (None, 43, 158, 24) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 20, 77, 36) 21636
_________________________________________________________________
elu_2 (ELU) (None, 20, 77, 36) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 8, 37, 48) 43248
_________________________________________________________________
elu_3 (ELU) (None, 8, 37, 48) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 6, 35, 64) 27712
_________________________________________________________________
elu_4 (ELU) (None, 6, 35, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 4, 33, 64) 36928
_________________________________________________________________
elu_5 (ELU) (None, 4, 33, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 8448) 0
_________________________________________________________________
dense_1 (Dense) (None, 100) 844900
_________________________________________________________________
elu_6 (ELU) (None, 100) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 100) 0
_________________________________________________________________
dense_2 (Dense) (None, 50) 5050
_________________________________________________________________
elu_7 (ELU) (None, 50) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 50) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 510
_________________________________________________________________
elu_8 (ELU) (None, 10) 0
_________________________________________________________________
dense_4 (Dense) (None, 1) 11
=================================================================
Total params: 981,819
Trainable params: 981,819
Non-trainable params: 0
_________________________________________________________________
```
Here is a visualization of the architecture (note: visualizing the architecture is optional according to the project rubric)
![alt text][img_model]
#### 3. Creation of the Training Set & Training Process
For each data sample, I used all three provided images (from the center, left, and right cameras) and also augmented the data with a flipped version of the center camera's image.
Here's an example image from the center camera.
![alt text][img_center]
Here's an image at the same time sample from the left camera.
![alt text][img_left]
Here's an image at the same time sample from the right camera.
![alt text][img_right]
Here's the image from the center camera, flipped left<->right.
![alt text][img_center_flipped]
Adding the left and right images to the training set paired with corrected angles should help the car recover when steering too far to the left or right.
Randomizing the image brightness makes the convolutional network more robust. The easiest way to do this is to convert the RGB image to HSV colour space and darken it by a random factor. To prevent completely black images, a lower limit of 25% brightness was enforced. After this, the image was converted back to RGB colour space. (model.py lines 31-42)
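A sketch of that brightness randomization (illustrative, not the exact model.py code):

```python
import cv2
import numpy as np

def randomize_image_brightness(image):
    """Scale the V channel by a random factor, never below 25% brightness."""
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float64)
    factor = np.random.uniform(0.25, 1.0)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```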
Images were read in from files, and the flipped image added, using a Python generator. The generator processed lines of the file that stored image locations along with angle data (driving_log.csv) in batches of 32 (model.py lines 45-132). The generator also shuffled the array containing the training samples prior to each epoch, so that training data would not be fed to the network in the same order (model.py lines 132).
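A simplified sketch of such a generator; the CSV column positions and the flipped-image augmentation shown here are assumptions based on the description above:

```python
import cv2
import numpy as np
from sklearn.utils import shuffle

def generator(samples, batch_size=32):
    while True:
        samples = shuffle(samples)                  # reshuffle before each epoch
        for offset in range(0, len(samples), batch_size):
            batch = samples[offset:offset + batch_size]
            images, angles = [], []
            for line in batch:
                image = cv2.cvtColor(cv2.imread(line[0]), cv2.COLOR_BGR2RGB)
                angle = float(line[3])              # steering angle column
                images.append(image)
                angles.append(angle)
                images.append(np.fliplr(image))     # flipped copy
                angles.append(-angle)
            yield np.array(images), np.array(angles)
```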
I trained the model for 5 epochs using an Adam optimizer, which was probably more epochs than necessary, but I wanted to be sure the validation error was plateauing. Here is the chart visualizing the loss:
![alt text][img_model_mse_loss]
| 54.875598 | 434 | 0.740954 | eng_Latn | 0.970196 |
f4747afa31f2a3180bbb19a0f1edb7dce4e1e26c | 83 | md | Markdown | README.md | stere-marius/McMarketDiscordBot | 7eada6a87e52a6c755be4e75cf81d6046911266f | [
"MIT"
] | null | null | null | README.md | stere-marius/McMarketDiscordBot | 7eada6a87e52a6c755be4e75cf81d6046911266f | [
"MIT"
] | null | null | null | README.md | stere-marius/McMarketDiscordBot | 7eada6a87e52a6c755be4e75cf81d6046911266f | [
"MIT"
] | null | null | null | # McMarketDiscordBot
Discord bot for verifying user purchase for specific resource
| 27.666667 | 61 | 0.855422 | eng_Latn | 0.974557 |
f4753b4be631fd1f5d056bf558d4bdd3f43e94dd | 72 | md | Markdown | docs/examples/client.md | DavisDmitry/pyCubes | f234b9d3df959401c96c9314d5fd319524c27763 | [
"MIT"
] | 8 | 2021-09-27T04:45:14.000Z | 2022-03-14T12:42:53.000Z | docs/examples/client.md | DavisDmitry/pyCubes | f234b9d3df959401c96c9314d5fd319524c27763 | [
"MIT"
] | 57 | 2021-10-08T07:08:31.000Z | 2022-03-04T07:30:07.000Z | docs/examples/client.md | DavisDmitry/pyCubes | f234b9d3df959401c96c9314d5fd319524c27763 | [
"MIT"
] | null | null | null | ```python3 title="examples/client.py"
--8<-- "examples/client.py"
```
| 12 | 37 | 0.625 | eng_Latn | 0.66734 |
f4758a3e0c7a8c2257f85df1edc07794fbeb8699 | 1,679 | md | Markdown | docs/framework/wcf/diagnostics/etw/223-operationfaulted.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/etw/223-operationfaulted.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/etw/223-operationfaulted.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "223 - OperationFaulted"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 2f7d89d7-3a6a-40fe-9610-5424eb6bbf61
caps.latest.revision: 6
author: "dotnet-bot"
ms.author: "dotnetcontent"
manager: "wpickett"
ms.workload:
- "dotnet"
---
# 223 - OperationFaulted
## Properties
|||
|-|-|
|ID|223|
|Keywords|EndToEndMonitoring, HealthMonitoring, Troubleshooting, ServiceModel|
|Level|Warning|
|Channel|Microsoft-Windows-Application Server-Applications/Analytic|
## Description
This event is emitted when the Service Model's default `OperationInvoker` has encountered an exception deriving from `FaultException` while invoking its method.
## Message
The '%1' method threw a FaultException when invoked by the OperationInvoker. The method call duration was '%2' ms.
## Details
|Data Item Name|Data Item Type|Description|
|--------------------|--------------------|-----------------|
|MethodName|`xs:string`|The CLR name of the method that was invoked by the `OperationInvoker`.|
|Duration|`xs:long`|The time, in milliseconds, that it took the `OperationInvoker` to invoke the method.|
|HostReference|`xs:string`|For Web-hosted services, this field uniquely identifies the service in the Web hierarchy. Its format is defined as 'Web Site Name Application Virtual Path|Service Virtual Path|ServiceName'. Example: 'Default Web Site/CalculatorApplication|/CalculatorService.svc|CalculatorService'.|
|AppDomain|`xs:string`|The string returned by AppDomain.CurrentDomain.FriendlyName.|
| 38.159091 | 331 | 0.718285 | eng_Latn | 0.597422 |
f4759ead0e11c2767f54fc9b94fc992ad72f1100 | 10,728 | md | Markdown | docs/2014/database-engine/behavior-changes-to-database-engine-features-in-sql-server-2014.md | cawrites/sql-docs | 58158eda0aa0d7f87f9d958ae349a14c0ba8a209 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-07T19:40:49.000Z | 2020-09-19T00:57:12.000Z | docs/2014/database-engine/behavior-changes-to-database-engine-features-in-sql-server-2014.md | cawrites/sql-docs | 58158eda0aa0d7f87f9d958ae349a14c0ba8a209 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/behavior-changes-to-database-engine-features-in-sql-server-2014.md | cawrites/sql-docs | 58158eda0aa0d7f87f9d958ae349a14c0ba8a209 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-03-11T20:30:39.000Z | 2020-05-07T19:40:49.000Z | ---
title: "Behavior Changes to Database Engine Features in SQL Server 2014 | Microsoft Docs"
ms.custom: ""
ms.date: "06/13/2017"
ms.prod: "sql-server-2014"
ms.reviewer: ""
ms.technology: "database-engine"
ms.topic: conceptual
helpviewer_keywords:
- "behavior changes [SQL Server]"
- "Database Engine [SQL Server], what's new"
- "Transact-SQL behavior changes"
ms.assetid: 65eaafa1-9e06-4264-b547-cbee8013c995
author: mashamsft
ms.author: mathoma
manager: craigg
---
# Behavior Changes to Database Engine Features in SQL Server 2014
This topic describes behavior changes in the [!INCLUDE[ssDE](../includes/ssde-md.md)]. Behavior changes affect how features work or interact in [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)] as compared to earlier versions of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)].
## <a name="SQL14"></a> Behavior Changes in [!INCLUDE[ssSQL14](../includes/sssql14-md.md)]
In earlier versions of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)], queries against an XML document that contains strings over a certain length (more than 4020 characters) can return incorrect results. In [!INCLUDE[ssSQL14](../includes/sssql14-md.md)], such queries return the correct results.
## <a name="Denali"></a> Behavior Changes in [!INCLUDE[ssSQL11](../includes/sssql11-md.md)]
### Metadata Discovery
Improvements in the [!INCLUDE[ssDE](../includes/ssde-md.md)] starting with [!INCLUDE[ssSQL11](../includes/sssql11-md.md)] allow SQLDescribeCol to obtain more accurate descriptions of the expected results than those returned by SQLDescribeCol in previous versions of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)]. For more information, see [Metadata Discovery](../relational-databases/native-client/features/metadata-discovery.md).
The [SET FMTONLY](/sql/t-sql/statements/set-fmtonly-transact-sql) option for determining the format of a response without actually running the query is replaced with [sp_describe_first_result_set (Transact-SQL)](/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql), [sp_describe_undeclared_parameters (Transact-SQL)](/sql/relational-databases/system-stored-procedures/sp-describe-undeclared-parameters-transact-sql), [sys.dm_exec_describe_first_result_set (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-describe-first-result-set-transact-sql), and [sys.dm_exec_describe_first_result_set_for_object (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-describe-first-result-set-for-object-transact-sql).
### Changes to Behavior in Scripting a SQL Server Agent Task
 Starting with [!INCLUDE[ssSQL11](../includes/sssql11-md.md)], if you create a new job by copying the script from an existing job, the new job might inadvertently affect the existing job. To create a new job using the script from an existing job, manually delete the *\@schedule_uid* parameter, which is usually the last parameter of the section that creates the job schedule in the existing job. This creates a new independent schedule for the new job without affecting existing jobs.
### Constant Folding for CLR User-Defined Functions and Methods
Starting with [!INCLUDE[ssSQL11](../includes/sssql11-md.md)], the following user-defined CLR objects are now foldable:
- Deterministic scalar-valued CLR user-defined functions.
- Deterministic methods of CLR user-defined types.
This improvement seeks to enhance performance when these functions or methods are called more than once with the same arguments. However, this change may cause unexpected results when non-deterministic functions or methods have been marked as deterministic in error. The determinism of a CLR function or method is indicated by the value of the `IsDeterministic` property of the `SqlFunctionAttribute` or `SqlMethodAttribute`.
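 As an illustrative sketch (the class and function below are hypothetical), a scalar-valued CLR UDF declares its determinism like this:

```csharp
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public class MathFunctions
{
    // Marked deterministic, so calls with constant arguments may be
    // constant-folded by the Query Optimizer.
    [SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static SqlDouble SquareIt(SqlDouble x)
    {
        return x * x;
    }
}
```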
### Behavior of STEnvelope() Method Has Changed with Empty Spatial Types
The behavior of the `STEnvelope` method with empty objects is now consistent with the behavior of other [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] spatial methods.
In [!INCLUDE[ssKatmai](../includes/sskatmai-md.md)], the `STEnvelope` method returned the following results when called with empty objects:
```sql
SELECT geometry::Parse('POINT EMPTY').STEnvelope().ToString()
-- returns POINT EMPTY
SELECT geometry::Parse('LINESTRING EMPTY').STEnvelope().ToString()
-- returns LINESTRING EMPTY
SELECT geometry::Parse('POLYGON EMPTY').STEnvelope().ToString()
-- returns POLYGON EMPTY
```
In [!INCLUDE[ssSQL11](../includes/sssql11-md.md)], the `STEnvelope` method now returns the following results when called with empty objects:
```sql
SELECT geometry::Parse('POINT EMPTY').STEnvelope().ToString()
-- returns GEOMETRYCOLLECTION EMPTY
SELECT geometry::Parse('LINESTRING EMPTY').STEnvelope().ToString()
-- returns GEOMETRYCOLLECTION EMPTY
SELECT geometry::Parse('POLYGON EMPTY').STEnvelope().ToString()
-- returns GEOMETRYCOLLECTION EMPTY
```
To determine whether a spatial object is empty, call the [STIsEmpty (geometry Data Type)](/sql/t-sql/spatial-geometry/stisempty-geometry-data-type) method.
### LOG Function Has New Optional Parameter
The `LOG` function now has an optional *base* parameter. For more information, see [LOG (Transact-SQL)](/sql/t-sql/functions/log-transact-sql).
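 For example (illustrative):

```sql
SELECT LOG(8.0);    -- natural logarithm of 8 (approximately 2.079)
SELECT LOG(8.0, 2); -- logarithm of 8 in base 2, returns 3
```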
### Statistics Computation during Partitioned Index Operations Has Changed
In [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)], statistics are not created by scanning all rows in the table when a partitioned index is created or rebuilt. Instead, the Query Optimizer uses the default sampling algorithm to generate statistics. After upgrading a database with partitioned indexes, you may notice a difference in the histogram data for these indexes. This change in behavior may not affect query performance. To obtain statistics on partitioned indexes by scanning all the rows in the table, use `CREATE STATISTICS` or `UPDATE STATISTICS` with the `FULLSCAN` clause.
### Data Type Conversion by the XML value Method Has Changed
The internal behavior of the `value` method of the `xml` data type has changed. This method performs an XQuery against the XML and returns a scalar value of the specified [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] data type. The xs type has to be converted to the [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] data type. Previously, the `value` method internally converted the source value to an xs:string, then converted the xs:string to the [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] data type. In [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)], the conversion to xs:string is skipped in the following cases:
|Source XS data type|Destination SQL Server data type|
|-------------------------|--------------------------------------|
|byte<br /><br /> short<br /><br /> int<br /><br /> integer<br /><br /> long<br /><br /> unsignedByte<br /><br /> unsignedShort<br /><br /> unsignedInt<br /><br /> unsignedLong<br /><br /> positiveInteger<br /><br /> nonPositiveInteger<br /><br /> negativeInteger<br /><br /> nonNegativeInteger|tinyint<br /><br /> smallint<br /><br /> int<br /><br /> bigint<br /><br /> decimal<br /><br /> numeric|
|decimal|decimal<br /><br /> numeric|
|float|real|
|double|float|
The new behavior improves performance when the intermediate conversion can be skipped. However, when data type conversions fail, you see different error messages than those that were raised when converting from the intermediate xs:string value. For example, if the value method failed to convert the `int` value 100000 to a `smallint`, the previous error message was:
`The conversion of the nvarchar value '100000' overflowed an INT2 column. Use a larger integer column.`
In [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)], without the intermediate conversion to xs:string, the error message is:
`Arithmetic overflow error converting expression to data type smallint.`
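 A minimal sketch that reproduces the new message (the values are illustrative):

```sql
DECLARE @x xml = N'<v>100000</v>';
-- Fails with: Arithmetic overflow error converting expression to data type smallint.
SELECT @x.value('(/v/text())[1]', 'smallint');
```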
### sqlcmd.exe Behavior Change in XML Mode
There are behavior changes if you use sqlcmd.exe with XML mode (:XML ON command) when executing a SELECT * from T FOR XML ....
### DBCC CHECKIDENT Revised Message
In [!INCLUDE[ssSQL11](../includes/sssql11-md.md)], the message returned by the DBCC CHECKIDENT command has changed only when it is used with RESEED *new_reseed_value* to change current identity value. The new message is *"Checking identity information: current identity value '\<current identity value>'"*. DBCC execution completed. If DBCC printed error messages, contact your system administrator."
In earlier versions, the message is *"Checking identity information: current identity value '\<current identity value>', current column value '\<current column value>'. DBCC execution completed. If DBCC printed error messages, contact your system administrator."* The message is unchanged when `DBCC CHECKIDENT` is specified with `NORESEED`, without a second parameter, or without a reseed value. For more information, see [DBCC CHECKIDENT (Transact-SQL)](/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql).
### Behavior of exist() function on XML datatype has changed
The behavior of the `exist()` function has changed when comparing an XML data type with a null value to 0 (zero). Consider the following example:
```sql
DECLARE @test XML;
SET @test = null;
SELECT COUNT(1) WHERE @test.exist('/dogs') = 0;
```
 In earlier versions, this comparison returned 1 (true); now, it returns 0 (zero, false).
The following comparisons have not changed:
```sql
DECLARE @test XML;
SET @test = null;
SELECT COUNT(1) WHERE @test.exist('/dogs') = 1; -- 0 expected, 0 returned
SELECT COUNT(1) WHERE @test.exist('/dogs') IS NULL; -- 1 expected, 1 returned
```
## See Also
[Breaking Changes to Database Engine Features in SQL Server 2014](breaking-changes-to-database-engine-features-in-sql-server-2016.md)
[Deprecated Database Engine Features in SQL Server 2014](deprecated-database-engine-features-in-sql-server-2016.md)
[Discontinued Database Engine Functionality in SQL Server 2014](discontinued-database-engine-functionality-in-sql-server-2016.md)
[ALTER DATABASE Compatibility Level (Transact-SQL)](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level)
| 83.8125 | 851 | 0.742077 | eng_Latn | 0.942674 |
f4771a6fe79980800e0c9f75cdcc79d30430a09d | 2,622 | md | Markdown | wdk-ddi-src/content/avc/ns-avc-_avc_peer_do_list.md | Acidburn0zzz/windows-driver-docs-ddi | c16b5cd6c3f59ca5f963ff86f3f0926e3b910231 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-11-06T01:28:38.000Z | 2018-11-06T01:28:38.000Z | wdk-ddi-src/content/avc/ns-avc-_avc_peer_do_list.md | Acidburn0zzz/windows-driver-docs-ddi | c16b5cd6c3f59ca5f963ff86f3f0926e3b910231 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/avc/ns-avc-_avc_peer_do_list.md | Acidburn0zzz/windows-driver-docs-ddi | c16b5cd6c3f59ca5f963ff86f3f0926e3b910231 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:avc._AVC_PEER_DO_LIST
title: "_AVC_PEER_DO_LIST"
author: windows-driver-content
description: The AVC_PEER_DO_LIST describes all nonvirtual (peer) instances of avc.sys.
old-location: stream\avc_peer_do_list.htm
tech.root: stream
ms.assetid: 5420df9b-35e7-49b4-97dc-a1d61623551c
ms.date: 04/23/2018
ms.keywords: "*PAVC_PEER_DO_LIST, AVC_PEER_DO_LIST, AVC_PEER_DO_LIST structure [Streaming Media Devices], PAVC_PEER_DO_LIST, PAVC_PEER_DO_LIST structure pointer [Streaming Media Devices], _AVC_PEER_DO_LIST, avc/AVC_PEER_DO_LIST, avc/PAVC_PEER_DO_LIST, avcref_69feff07-d80c-4d5a-a5d8-fe942dfc5e26.xml, stream.avc_peer_do_list"
ms.topic: struct
req.header: avc.h
req.include-header: Avc.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- avc.h
api_name:
- AVC_PEER_DO_LIST
product:
- Windows
targetos: Windows
req.typenames: AVC_PEER_DO_LIST, *PAVC_PEER_DO_LIST
---
# _AVC_PEER_DO_LIST structure
## -description
The AVC_PEER_DO_LIST describes all nonvirtual (peer) instances of <i>avc.sys</i>.
## -struct-fields
### -field Count
Ignored on input. On output, set to the number of objects in the list. If zero, the caller must not attempt to dereference the <b>Objects</b> member (it is set to <b>NULL</b>).
### -field Objects
Ignored on input. On output (and if the <b>Count</b> member is not zero) <b>Objects</b> contains a pointer to a contiguous array of DEVICE_OBJECT pointers. The caller must release the reference held on each object (by using <b>ObDereferenceObject</b>), and free the memory containing the list (by using <b>ExFreePool</b>) when finished with it.
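As an illustrative sketch (not part of the header), a consumer might walk and release the list as follows; error handling is omitted:

```c
// 'list' is assumed to have been returned by an AVC_FUNCTION_PEER_DO_LIST request.
void ProcessPeerList(PAVC_PEER_DO_LIST list)
{
    for (ULONG i = 0; i < list->Count; i++) {
        PDEVICE_OBJECT peer = list->Objects[i];
        /* ... use the peer device object ... */
        ObDereferenceObject(peer);   // release the reference held on each object
    }
    if (list->Count != 0) {
        ExFreePool(list->Objects);   // free the memory containing the list
    }
}
```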
## -remarks
This structure is used with the <a href="https://msdn.microsoft.com/library/windows/hardware/ff554168">AVC_FUNCTION_PEER_DO_LIST</a> function code.
This structure is used only as a member inside the AVC_MULTIFUNC_IRB structure. It is not used by itself.
See <a href="https://msdn.microsoft.com/3b4ec139-ff01-40bd-8e29-92f554180585">How to Use Avc.sys</a> For information about building and sending an AV/C command.
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/ff554145">AVC_FUNCTION</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff554168">AVC_FUNCTION_PEER_DO_LIST</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff554177">AVC_MULTIFUNC_IRB</a>
| 26.22 | 344 | 0.769641 | eng_Latn | 0.455267 |
f477370f03cd254981fb55cc299fb38a2f57666e | 445 | md | Markdown | README.md | khaninejad/Contact | 6b7843d74f6e5c304399a0eb92b24e61d8096224 | [
"MIT"
] | null | null | null | README.md | khaninejad/Contact | 6b7843d74f6e5c304399a0eb92b24e61d8096224 | [
"MIT"
] | null | null | null | README.md | khaninejad/Contact | 6b7843d74f6e5c304399a0eb92b24e61d8096224 | [
"MIT"
] | null | null | null | Simple Laravel Contact Form
===================
This is a basic contact form built with **Laravel**.
----------
Documentation
-------------
Clone this repository:
> git clone https://github.com/khaninejad/Contact.git
> cd Contact
> composer update
> php artisan migrate
Edit the `.env` file to configure MySQL and the mail driver.
That's all!
## License
The Laravel framework is open-sourced software licensed under the [MIT license](http://opensource.org/licenses/MIT).
| 16.481481 | 116 | 0.678652 | eng_Latn | 0.817487 |
f47739d31b2de4bb342a08d31ba47a33469b159e | 2,080 | md | Markdown | _posts/2010-08-19-1329.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | 1 | 2020-11-20T20:39:54.000Z | 2020-11-20T20:39:54.000Z | _posts/2010-08-19-1329.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | null | null | null | _posts/2010-08-19-1329.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | null | null | null | ---
title: >
Teen Skin
layout: post
permalink: /view/1329
votes: 0
preview: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAABDUlEQVRIie1XwQ2DMAy8VOzAp59KbFB3E2YoM9EZ2KTuBkj99MMSpA9UShKDghTDh3ugYOJcznFMYqzt8QPRDRKYn6LdQ4z7KWaghPD5uK634xvIuK71WKV4EkFNqMTHrME0IAtNhmhoWGYAVFWRY8WEROCzrr60gTV28vKa6XR1ptM738y6HeXru5wLz/L+tKtGXIYwuyZvx2dyCHxlVzR5W3a+UC0+VX1Cfo7KNFTOZpdSSGW+fdZPYwkzTMtHVUHMz4USE7gv93H0hZs9ORy+sZQoLR4AMz2/PEhOnzv3ot1DjPve55eD7+A7+LaDU18IJ4YFQDAAxjYjqr4YyPcji//9yP+/D0xhOxW+Xv1kchsgNNMAAAAASUVORK5CYII="
---
<dl class="side-by-side">
<dt>Preview</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAABDUlEQVRIie1XwQ2DMAy8VOzAp59KbFB3E2YoM9EZ2KTuBkj99MMSpA9UShKDghTDh3ugYOJcznFMYqzt8QPRDRKYn6LdQ4z7KWaghPD5uK634xvIuK71WKV4EkFNqMTHrME0IAtNhmhoWGYAVFWRY8WEROCzrr60gTV28vKa6XR1ptM738y6HeXru5wLz/L+tKtGXIYwuyZvx2dyCHxlVzR5W3a+UC0+VX1Cfo7KNFTOZpdSSGW+fdZPYwkzTMtHVUHMz4USE7gv93H0hZs9ORy+sZQoLR4AMz2/PEhOnzv3ot1DjPve55eD7+A7+LaDU18IJ4YFQDAAxjYjqr4YyPcji//9yP+/D0xhOxW+Xv1kchsgNNMAAAAASUVORK5CYII=">
</dd>
<dt>Original</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAAAgCAYAAACinX6EAAAACXBIWXMAAA7DAAAOwwHHb6hkAAABmklEQVRoQ+2Z0RHCIAyGcQ8f3QB3ciZ2akdxBiSeaEgpqQRESuvlWpuC5OMHQj0pZa1KHFpfU241GZP0n263pJ91zjP7iPABB0Drp02OBjaAo939lNlpsinzdWefXSNcgFXtAIB7p7QCQB3ZvQ/KrNz7UH/QwOEB4LmgxBzQnQJKAxDJ/2dD4LkUxm33qwAXYG3/Lya55G/UDpCrvwsAMJFxgeT6/x4AzfJyA10r1xxALI3FjX37YZZ8pb1bIWxRzl8C4PJ7CogCweU5WN0AwA3FSuBgcf5uAHCB5PqlAKwxFkxQz2dsQxBGq8AEFUsatVp2kbG9AAhAhHvtA4BQAbCbrKma4gqAHscN3qoA/xwNtksANGjJd/pabfmaLQS+ugtb2519eZ9VoyTYWFkI+H6+RC2mjt0CADgAwp/heigANPjhAODh4YfEcArwgfthMCSAYw5AK0ILBcAqk1wKay2DtN5Wk2AzAMVWAbLZ8ZuerWc2EXICCT7uzyjrD7hepMYW7mILy+MkCI//bAW0AIDD+xZA8UywMoAHtUX0tm2IOOYAAAAASUVORK5CYII=">
</dd>
<dt>Title</dt>
<dd>Teen Skin</dd>
<dt>Description</dt>
<dd>Just a 14-15 year old Teenager skin</dd>
<dt>Added By</dt>
<dd>Scaream</dd>
<dt>Added On</dt>
<dd>2010-08-19</dd>
<dt>Votes</dt>
<dd>0</dd>
</dl>
| 69.333333 | 706 | 0.884615 | yue_Hant | 0.481357 |
f47786207dc977e5551025bde18850d032d9b88c | 2,274 | md | Markdown | README.md | robert-zaremba/create-near-app | 8b9baea1af07d13ea3b2ccacf33b6f36bdaf1a4d | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | robert-zaremba/create-near-app | 8b9baea1af07d13ea3b2ccacf33b6f36bdaf1a4d | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | robert-zaremba/create-near-app | 8b9baea1af07d13ea3b2ccacf33b6f36bdaf1a4d | [
"Apache-2.0",
"MIT"
] | null | null | null | create-near-app
===============
[![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/nearprotocol/create-near-app)
Quickly build apps backed by the [NEAR](https://near.org) blockchain
Prerequisites
=============
Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `12+`.
**Note**: if using Node version 13 please be advised that you will need version >= 13.7.0
Getting Started
===============
You can configure two main aspects of your new NEAR project:
* frontend: do you want a **vanilla JavaScript** stack or **React**?
* backend: do you want your contracts to be written in **Rust** or **AssemblyScript** (a dialect of TypeScript)?
| command | frontend | backend |
| ------------------------------------------------------ | ---------- | -------------- |
| `npx create-near-app new-awesome-app` | React | AssemblyScript |
| `npx create-near-app --vanilla new-awesome-app` | vanilla JS | AssemblyScript |
| `npx create-near-app --rust new-awesome-app` | React | Rust |
| `npx create-near-app --vanilla --rust new-awesome-app` | vanilla JS | Rust |
Develop your own Dapp
=====================
Follow the instructions in the README.md in the project you just created! 🚀
Getting Help
============
Check out our [documentation](https://docs.near.org) or chat with us on [Discord](http://near.chat). We'd love to hear from you!
Contributing
============
To make changes to `create-near-app` itself:
* clone the repository (Windows users, [use `git clone -c core.symlinks=true`](https://stackoverflow.com/a/42137273/249801))
* in your terminal, enter one of the folders inside `templates`, such as `templates/vanilla`
* now you can run `yarn` to install dependencies and `yarn dev` to run the local development server, just like you can in a new app created with `create-near-app`
License
=======
This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0).
See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details.
| 39.206897 | 164 | 0.637203 | eng_Latn | 0.929677 |
f477fc72f90016f8e968e55b6d93ca02b1890b2c | 6,700 | md | Markdown | windows-driver-docs-pr/ifs/mrxcreate.md | msarcletti/windows-driver-docs | 177447a51fe3b95ccba504c0e91f9f3889cab16c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-02-26T02:51:21.000Z | 2020-02-26T02:51:21.000Z | windows-driver-docs-pr/ifs/mrxcreate.md | msarcletti/windows-driver-docs | 177447a51fe3b95ccba504c0e91f9f3889cab16c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-21T17:24:17.000Z | 2021-01-21T17:24:17.000Z | windows-driver-docs-pr/ifs/mrxcreate.md | msarcletti/windows-driver-docs | 177447a51fe3b95ccba504c0e91f9f3889cab16c | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-08-11T00:01:58.000Z | 2021-11-24T02:51:30.000Z | ---
title: MRxCreate routine
description: TheMRxCreate routine is called by RDBSS to request that the network mini-redirector create a file system object.
ms.assetid: d1b664cf-37b6-4c65-8634-21695af2db21
keywords: ["MRxCreate routine Installable File System Drivers", "PMRX_CALLDOWN"]
topic_type:
- apiref
api_name:
- MRxCreate
api_location:
- mrx.h
api_type:
- UserDefined
ms.date: 11/28/2017
ms.localizationpriority: medium
---
# MRxCreate routine
The*MRxCreate* routine is called by [RDBSS](https://docs.microsoft.com/windows-hardware/drivers/ifs/the-rdbss-driver-and-library) to request that the network mini-redirector create a file system object.
Syntax
------
```ManagedCPlusPlus
PMRX_CALLDOWN MRxCreate;
NTSTATUS MRxCreate(
_Inout_ PRX_CONTEXT RxContext
)
{ ... }
```
Parameters
----------
*RxContext* \[in, out\]
A pointer to the RX\_CONTEXT structure. This parameter contains the IRP that is requesting the operation.
Return value
------------
*MRxCreate* returns STATUS\_SUCCESS on success or an appropriate NTSTATUS value, such as one of the following:
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Return code</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><strong>STATUS_INSUFFICIENT_RESOURCES</strong></td>
<td align="left"><p>There were insufficient resources to complete the operation.</p></td>
</tr>
<tr class="even">
<td align="left"><strong>STATUS_NETWORK_ACCESS_DENIED</strong></td>
<td align="left"><p>Network access was denied. This error can be returned if the network mini-redirector was asked to open a new file on a read-only share.</p></td>
</tr>
<tr class="odd">
<td align="left"><strong>STATUS_NOT_IMPLEMENTED</strong></td>
<td align="left"><p>A feature that is requested, such as remote boot or a remote page file, is not implemented.</p></td>
</tr>
<tr class="even">
<td align="left"><strong>STATUS_NOT_SUPPORTED</strong></td>
<td align="left"><p>A feature that is requested, such as extended attributes, is not supported.</p></td>
</tr>
<tr class="odd">
<td align="left"><strong>STATUS_OBJECT_NAME_COLLISION</strong></td>
<td align="left"><p>The network mini-redirector was asked to create a file that already exists.</p></td>
</tr>
<tr class="even">
<td align="left"><strong>STATUS_OBJECT_NAME_NOT_FOUND</strong></td>
<td align="left"><p>The object name was not found. This error can be returned if the network mini-redirector was asked to open a file that doesn't exist.</p></td>
</tr>
<tr class="odd">
<td align="left"><strong>STATUS_OBJECT_PATH_NOT_FOUND</strong></td>
<td align="left"><p>The object path was not found. This error can be returned if an NTFS stream object was requested and the remote file system does not support streams.</p></td>
</tr>
<tr class="even">
<td align="left"><strong>STATUS_REPARSE</strong></td>
<td align="left"><p>A reparse is required to handle a symbolic link.</p></td>
</tr>
<tr class="odd">
<td align="left"><strong>STATUS_RETRY</strong></td>
<td align="left"><p>The operation should be retried. This error can be returned if the network mini-redirector encountered a sharing violation or an access denied error.</p></td>
</tr>
</tbody>
</table>
Remarks
-------
*MRxCreate* is called by RDBSS to request that the network mini-redirector open a file system object across the network. This call is issued by RDBSS in response to receiving an [**IRP\_MJ\_CREATE**](irp-mj-create.md) request.
Before calling *MRxCreate*, RDBSS modifies the following members in the RX\_CONTEXT structure pointed to by the *RxContext* parameter:
**pRelevantSrvOpen** is set to the SRV\_OPEN structure.
**Create.pSrvCall** is set to the SRV\_CALL structure.
**Create.NtCreateParameters** is set to the requested NT\_CREATE\_PARAMETERS.
In the context of a network mini-redirector, a file object refers to the associated file control block (FCB) and file object extension (FOBX) structures. There is a one to one correspondence between file objects and FOBXs. Many file objects will refer to the same FCB, which represents a single file on a remote server. A client can have several different open requests (NtCreateFile requests) on the same FCB and each of these will create a new file object. RDBSS and network mini-redirectors can choose to send fewer *MRxCreate* requests than the NtCreateFile requests received, in effect sharing an SRV\_OPEN structure among several FOBXs.
If the *MRxCreate* request was for a file overwrite and *MRxCreate* returned STATUS\_SUCCESS, then RDBSS will acquire the paging I/O resource and truncate the file. If the file is being cached by cache manager, RDBSS will update the sizes the cache manager has with the ones just received from the server.
Before returning, *MRxCreate* must set the **CurrentIrp->IoStatus.Information** member of the RX\_CONTEXT structure pointed to by the *RxContext* parameter.
Requirements
------------
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td align="left"><p>Target platform</p></td>
<td align="left">Desktop</td>
</tr>
<tr class="even">
<td align="left"><p>Header</p></td>
<td align="left">Mrx.h (include Mrx.h)</td>
</tr>
</tbody>
</table>
## See also
[**MRxAreFilesAliased**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_chkfcb_calldown)
[**MRxCleanupFobx**](https://docs.microsoft.com/previous-versions/windows/hardware/drivers/ff549841(v=vs.85))
[**MRxCloseSrvOpen**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_calldown)
[**MRxCollapseOpen**](mrxcollapseopen.md)
[**MRxCreate**](mrxcreate.md)
[**MRxDeallocateForFcb**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_deallocate_for_fcb)
[**MRxDeallocateForFobx**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_deallocate_for_fobx)
[**MRxExtendForCache**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_extendfile_calldown)
[**MRxExtendForNonCache**](mrxextendfornoncache.md)
[**MRxFlush**](mrxflush.md)
[**MRxForceClosed**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_forceclosed_calldown)
[**MRxIsLockRealizable**](https://docs.microsoft.com/windows-hardware/drivers/ddi/mrx/nc-mrx-pmrx_is_lock_realizable)
[**MRxShouldTryToCollapseThisOpen**](mrxshouldtrytocollapsethisopen.md)
[**MRxTruncate**](mrxtruncate.md)
[**MRxZeroExtend**](mrxzeroextend.md)
| 37.222222 | 643 | 0.727164 | eng_Latn | 0.864081 |
f47811f8239a6527d1902a426158de95a467820b | 4,374 | md | Markdown | sdk-api-src/content/tom/nf-tom-itextfont-setlanguageid.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/tom/nf-tom-itextfont-setlanguageid.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/tom/nf-tom-itextfont-setlanguageid.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:tom.ITextFont.SetLanguageID
title: ITextFont::SetLanguageID (tom.h)
description: Sets the language ID or language code identifier (LCID).
helpviewer_keywords: ["ITextFont interface [Windows Controls]","SetLanguageID method","ITextFont.SetLanguageID","ITextFont::SetLanguageID","SetLanguageID","SetLanguageID method [Windows Controls]","SetLanguageID method [Windows Controls]","ITextFont interface","_win32_ITextFont_SetLanguageID","_win32_ITextFont_SetLanguageID_cpp","controls.ITextFont_SetLanguageID","controls._win32_ITextFont_SetLanguageID","tom/ITextFont::SetLanguageID"]
old-location: controls\ITextFont_SetLanguageID.htm
tech.root: Controls
ms.assetid: VS|Controls|~\controls\richedit\textobjectmodel\textobjectmodelreference\textobjectmodelinterfaces\setlanguageid.htm
ms.date: 12/05/2018
ms.keywords: ITextFont interface [Windows Controls],SetLanguageID method, ITextFont.SetLanguageID, ITextFont::SetLanguageID, SetLanguageID, SetLanguageID method [Windows Controls], SetLanguageID method [Windows Controls],ITextFont interface, _win32_ITextFont_SetLanguageID, _win32_ITextFont_SetLanguageID_cpp, controls.ITextFont_SetLanguageID, controls._win32_ITextFont_SetLanguageID, tom/ITextFont::SetLanguageID
req.header: tom.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows Vista [desktop apps only]
req.target-min-winversvr: Windows Server 2003 [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll: Msftedit.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- ITextFont::SetLanguageID
- tom/ITextFont::SetLanguageID
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- Msftedit.dll
api_name:
- ITextFont.SetLanguageID
---
# ITextFont::SetLanguageID
## -description
Sets the language ID or language code identifier (LCID).
## -parameters
### -param Value [in]
Type: <b>long</b>
The new language identifier. The low word contains the language identifier. The high word is either zero or it contains the high word of the locale identifier LCID. For more information, see <a href="/windows/desktop/Intl/locale-identifiers">Locale Identifiers</a>.
## -returns
Type: <b>HRESULT</b>
If the method succeeds, it returns <b>S_OK</b>. If the method fails, it returns one of the following COM error codes. For more information about COM error codes, see <a href="/windows/desktop/com/error-handling-in-com">Error Handling in COM</a>.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_INVALIDARG</b></dt>
</dl>
</td>
<td width="60%">
Invalid argument.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>CO_E_RELEASED</b></dt>
</dl>
</td>
<td width="60%">
The font object is attached to a range that has been deleted.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_ACCESSDENIED</b></dt>
</dl>
</td>
<td width="60%">
Write access is denied.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_OUTOFMEMORY</b></dt>
</dl>
</td>
<td width="60%">
Insufficient memory.
</td>
</tr>
</table>
## -remarks
If the high nibble of <i>Value</i> is <a href="/windows/win32/api/tom/ne-tom-tomconstants">tomCharset</a>, set the <i>charrep</i> from the <i>charset</i> in the low byte and the pitch and family from the next byte. See also <a href="/windows/desktop/api/tom/nf-tom-itextfont2-setcharrep">ITextFont2::SetCharRep</a>.
If the high nibble of <i>Value</i> is <a href="/windows/win32/api/tom/ne-tom-tomconstants">tomCharRepFromLcid</a>, set the <i>charrep</i> from the LCID and set the LCID as well. See <a href="/windows/desktop/api/tom/nf-tom-itextfont-getlanguageid">ITextFont::GetLanguageID</a> for more information.
To set the BCP-47 language tag, such as "en-US", call <a href="/windows/desktop/api/tom/nf-tom-itextrange2-settext2">ITextRange2::SetText2</a> and set the <a href="/windows/win32/api/tom/ne-tom-tomconstants">tomLanguageTag</a> and <i>bstr</i> with the language tag.
## -see-also
<b>Conceptual</b>
<a href="/windows/desktop/api/tom/nf-tom-itextfont-getlanguageid">GetLanguageID</a>
<a href="/windows/desktop/api/tom/nn-tom-itextfont">ITextFont</a>
<b>Reference</b>
<a href="/windows/desktop/Controls/text-object-model">Text Object Model</a> | 29.755102 | 439 | 0.747599 | yue_Hant | 0.584967 |
f478547bcaaad86c4e7384abed8d1302655e4c5a | 2,994 | md | Markdown | docs/app-dev/app-architecture.md | hongyuefan/tendermint | 851c0dc4f3088723c724cb01c4877ee78c3d6ffc | [
"Apache-2.0"
] | 1 | 2022-01-31T01:11:06.000Z | 2022-01-31T01:11:06.000Z | docs/app-dev/app-architecture.md | hongyuefan/tendermint | 851c0dc4f3088723c724cb01c4877ee78c3d6ffc | [
"Apache-2.0"
] | 39 | 2021-10-07T23:24:48.000Z | 2022-03-31T11:20:01.000Z | docs/app-dev/app-architecture.md | hongyuefan/tendermint | 851c0dc4f3088723c724cb01c4877ee78c3d6ffc | [
"Apache-2.0"
] | 1 | 2020-10-18T04:41:33.000Z | 2020-10-18T04:41:33.000Z | ---
order: 3
---
# Application Architecture Guide
Here we provide a brief guide on the recommended architecture of a
Tendermint blockchain application.
The following diagram provides a superb example:
![cosmos-tendermint-stack](../imgs/cosmos-tendermint-stack-4k.jpg)
We distinguish here between two forms of "application". The first is the
end-user application, like a desktop-based wallet app that a user downloads,
which is where the user actually interacts with the system. The other is the
ABCI application, which is the logic that actually runs on the blockchain.
Transactions sent by an end-user application are ultimately processed by the ABCI
application after being committed by the Tendermint consensus.
The end-user application in this diagram is the [Lunie](https://lunie.io/) app, located at the bottom
left. Lunie communicates with a REST API exposed by the application.
The application with Tendermint nodes and verifies Tendermint light-client proofs
through the Tendermint Core RPC. The Tendermint Core process communicates with
a local ABCI application, where the user query or transaction is actually
processed.
The ABCI application must be a deterministic result of the Tendermint
consensus - any external influence on the application state that didn't
come through Tendermint could cause a consensus failure. Thus _nothing_
should communicate with the ABCI application except Tendermint via ABCI.
If the ABCI application is written in Go, it can be compiled into the
Tendermint binary. Otherwise, it should use a unix socket to communicate
with Tendermint. If it's necessary to use TCP, extra care must be taken
to encrypt and authenticate the connection.
All reads from the ABCI application happen through the Tendermint `/abci_query`
endpoint. All writes to the ABCI application happen through the Tendermint
`/broadcast_tx_*` endpoints.
The Light-Client Daemon is what provides light clients (end users) with
nearly all the security of a full node. It formats and broadcasts
transactions, and verifies proofs of queries and transaction results.
Note that it need not be a daemon - the Light-Client logic could instead
be implemented in the same process as the end-user application.
Note for those ABCI applications with weaker security requirements, the
functionality of the Light-Client Daemon can be moved into the ABCI
application process itself. That said, exposing the ABCI application process
to anything besides Tendermint over ABCI requires extreme caution, as
all transactions, and possibly all queries, should still pass through
Tendermint.
See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://docs.tendermint.com/master/rpc/)
- [Tendermint in Production](../tendermint-core/running-in-production.md)
- [ABCI spec](https://github.com/tendermint/tendermint/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
| 49.081967 | 111 | 0.811289 | eng_Latn | 0.993428 |
f478a0a940d982e4c25ebf6191c433f8a46880a0 | 3,074 | md | Markdown | README.md | olivettigroup/table_extractor | df800c5b7f394aad378fddedd48fe954231ec015 | [
"MIT"
] | 16 | 2019-06-01T17:32:44.000Z | 2022-01-15T04:53:41.000Z | README.md | lane-lh/table_extractor | df800c5b7f394aad378fddedd48fe954231ec015 | [
"MIT"
] | 2 | 2019-12-18T11:45:01.000Z | 2021-03-01T12:04:06.000Z | README.md | lane-lh/table_extractor | df800c5b7f394aad378fddedd48fe954231ec015 | [
"MIT"
] | 16 | 2019-12-17T13:06:55.000Z | 2022-02-25T04:21:01.000Z | # table_extractor
Code and data used in the paper, A Machine Learning Approach to Zeolite Synthesis Enabled by Automatic Literature Data Extraction
There are two main components to this repository:
1. table_extractor code
2. zeolite synthesis data
# 1. Table Extraction Code
This code extracts tables into json format from HTML/XML files. These HTML/XML files need to be supplied by the researcher. The code is written in Python3. To run the code:
1. Fork this repository
2. Download the Olivetti group materials science FastText word embeddings
- Available here: https://figshare.com/s/70455cfcd0084a504745
- Download all 4 files and place in the tableextractor/bin folder
3. Install all dependencies
- json, pandas, spacy, bs4, gensim, numpy, unidecode, sklearn, scipy, traceback
4. Place all files in tableextractor/data
5. Use Jupyter (Table Extractor Tutorial) to run the code
The code takes in a list of files and corresponding DOIs and returns a list of all tables extracted from the files as JSON objects. Currently, the code supports files from ACS, APS, Elsevier, Wiley, Springer, and RSC.
# 2. Zeolite Synthesis Data
The germanium containing zeolite data set used in the paper is publicly available in both Excel and CSV formats. Here is a description of each feature:
doi- DOI of the paper the synthesis route comes from
Si:B- molar amount of each element/compound/molecule used in the synthesis. Amounts are normalized to give Si=1 or Ge=1 if Si=0
Time- crystallization time in hours
Temp- crystallization temperature in °C
SDA Type- name given to the organic structure directing agent (OSDA) molecule in the paper
SMILES- the SMILES representation of the OSDA molecule
SDA_Vol- the DFT calculated molar volume of the OSDA molecule in bohr^3
SDA_SA- the DFT calculated surface area of the OSDA molecule in bohr^2
SDA_KFI- the DFT calculated Kier flexibility index of the OSDA molecule
From?- the location within a paper the compositional information is extracted. Either Table, Text, or Supplemental
Extracted- Products of the synthesis as they appear in the paper
Zeo1- the primary zeolite (zeotype) material made in the synthesis
Zeo2- the secondary zeolite (zeotype) material made in the synthesis
Dense1- the primary dense phase made in the synthesis
Dense2- the secondary dense phase made in the synthesis
Am- whether an amorphous phase is made in (or remains after) the synthesis
Other- any other unidentified phases made in the synthesis
ITQ- whether the synthesis made a zeolite in the ITQ series
FD1- the framework density of Zeo1
MR1- the maximum ring size of Zeo1
FD2- the framework density of Zeo2
MR2- the framework density of Zeo2
# Citing
If you use this code or data, please cite the following as appropriate.
A Machine Learning Approach to Zeolite Synthesis Enabled by Automatic Literature Data Extraction
Zach Jensen, Edward Kim, Soonhyoung Kwon, Terry Z. H. Gani, Yuriy Román-Leshkov, Manuel Moliner, Avelino Corma, and Elsa Olivetti
ACS Central Science Article ASAP
DOI: 10.1021/acscentsci.9b00193
| 40.986667 | 218 | 0.794079 | eng_Latn | 0.99193 |
f478c6b13cd45f3f72dbd2e8c2fbdf380f703cdc | 6,953 | md | Markdown | design/24543/conservative-inner-frame.md | thepudds/proposal | 1a666c4e4385c6c305ef53fed824a88eb240fd53 | [
"BSD-3-Clause"
] | 3,044 | 2015-07-29T05:10:38.000Z | 2022-03-31T19:59:57.000Z | design/24543/conservative-inner-frame.md | thepudds/proposal | 1a666c4e4385c6c305ef53fed824a88eb240fd53 | [
"BSD-3-Clause"
] | 43 | 2015-11-01T02:27:06.000Z | 2022-03-20T16:20:20.000Z | design/24543/conservative-inner-frame.md | LaudateCorpus1/proposal | 8b5daebee3198c7b6265b40c06b077be5e4c5b4b | [
"BSD-3-Clause"
] | 487 | 2015-08-04T23:29:47.000Z | 2022-03-19T17:13:57.000Z | # Proposal: Conservative inner-frame scanning for non-cooperative goroutine preemption
Author(s): Austin Clements
Last updated: 2019-01-21
Discussion at https://golang.org/issue/24543.
## Introduction
Up to and including Go 1.10, Go has used cooperative preemption with
safe-points only at function calls.
We propose that the Go implementation switch to *non-cooperative*
preemption.
The background and rationale for this proposal are detailed in the
[top-level proposal document](../24543-non-cooperative-preemption.md).
This document details a specific approach to non-cooperative
preemption that uses conservative GC techniques to find live pointers
in the inner-most frame of a preempted goroutine.
## Proposal
I propose that Go use POSIX signals (or equivalent) to interrupt
running goroutines and capture their CPU state.
If a goroutine is interrupted at a point that must be GC atomic, as
detailed in ["Handling
unsafe-points"](../24543-non-cooperative-preemption.md#handling-unsafe-points)
in the top-level proposal, the runtime can simply let the goroutine
resume and try again later.
This leaves the problem of how to find local GC roots—live pointers on
the stack—of a preempted goroutine.
Currently, the Go compiler records *stack maps* at every call site
that tell the Go runtime where live pointers are in the call's stack
frame.
Since Go currently only preempts at function calls, this is sufficient
to find all live pointers on the stack at any cooperative preemption
point.
But an interrupted goroutine is unlikely to have a liveness map at the
interrupted instruction.
I propose that when a goroutine is preempted non-cooperatively, the
garbage collector scan the inner-most stack frame and registers of
that goroutine *conservatively*, treating anything that could be a
valid heap pointer as a heap pointer, while using the existing
call-site stack maps to precisely scan all other frames.
## Rationale
Compared to the alternative proposal of [emitting liveness maps for
every instruction](safe-points-everywhere.md), this proposal is far
simpler to implement and much less likely to have a long bug tail.
It will also make binaries about 5% smaller.
The Go 1.11 compiler began emitting stack and register maps in support
of non-cooperative preemption as well as debugger call injection, but
this increased binary sizes by about 5%.
This proposal will allow us to roll that back.
This approach has the usual problems of conservative GC, but in a
severely limited scope.
In particular, it can cause heap allocations to remain live longer
than they should ("GC leaks").
However, unlike conservatively scanning the whole stack (or the whole
heap), it's unlikely that any incorrectly retained objects would last
more than one GC cycle because the inner frame typically changes
rapidly.
Hence, it's unlikely that an inner frame would remain the inner frame
across multiple cycles.
Furthermore, this can be combined with cooperative preemption to
further reduce the chances of retaining a dead object.
Neither stack scan preemption nor scheduler preemption have tight time
bounds, so the runtime can wait for a cooperative preemption before
falling back to non-cooperative preemption.
STW preemptions have a tight time bound, but don't scan the stack, and
hence can use non-cooperative preemption immediately.
### Stack shrinking
Currently, the garbage collector triggers stack shrinking during stack
scanning, but this will have to change if stack scanning may not have
precise liveness information.
With this proposal, stack shrinking must happen only at cooperative
preemption points.
One approach is to have stack scanning mark stacks that should be
shrunk, but defer the shrinking until the next cooperative preemption.
### Debugger call injection
Currently, debugger call injection support depends on the liveness
maps emitted at every instruction by Go 1.11.
This is necessary in case a GC or stack growth happens during an
injected call.
If we remove these, the runtime will need a different approach to call
injection.
One possibility is to leave the interrupted frame in a "conservative"
state, and to start a new stack allocation for the injected call.
This way, if the stack needs to be grown during the injected call,
only the stack below the call injection needs to be moved, and the
runtime will have precise liveness information for this region of the
stack.
Another possibility is for the injected call to start on a new
goroutine, though this complicates passing stack pointers from a
stopped frame, which is likely to be a common need.
### Scheduler preemptions
We will focus first on STW and stack scan preemptions, since these are
where cooperative preemption is more likely to cause issues in
production code.
However, it's worth considering how this mechanism can be used for
scheduler preemptions.
Preemptions for STWs and stack scans are temporary, and hence cause
little trouble if a pointer's lifetime is extended by conservative
scanning.
Scheduler preemptions, on the other hand, last longer and hence may
keep leaked pointers live for longer, though they are still bounded.
Hence, we may wish to bias scheduler preemptions toward cooperative
preemption.
Furthermore, since a goroutine that has been preempted
non-cooperatively must record its complete register set, it requires
more state than a cooperatively-preempted goroutine, which only needs
to record a few registers.
STWs and stack scans can cause at most GOMAXPROCS goroutines to be in
a preempted state simultaneously, while scheduler preemptions could in
principle preempt all runnable goroutines, and hence require
significantly more space for register sets.
The simplest way to implement this is to leave preempted goroutines in
the signal handler, but that would consume an OS thread for each
preempted goroutine, which is probably not acceptable for scheduler
preemptions.
Barring this, the runtime needs to explicitly save the relevant
register state.
It may be possible to store the register state on the stack of the
preempted goroutine itself, which would require no additional memory
or resources like OS threads.
If this is not possible, this is another reason to bias scheduler
preemptions toward cooperative preemption.
## Compatibility
This proposal introduces no new APIs, so it is Go 1 compatible.
## Implementation
Austin Clements plans to implement this proposal for Go 1.13. The
rough implementation steps are:
1. Make stack shrinking occur synchronously, decoupling it from stack
scanning.
2. Implement conservative frame scanning support.
3. Implement general support for asynchronous injection of calls using
non-cooperative preemption.
4. Use asynchronous injection to inject STW and stack scan operations.
5. Re-implement debug call injection to not depend on liveness maps.
6. Remove liveness maps except at call sites.
7. Implement non-cooperative scheduler preemption support.
| 39.95977 | 86 | 0.810729 | eng_Latn | 0.999288 |
f478f43742709d9a783a74ea1176685be57e4f96 | 219 | md | Markdown | README.md | ytn86/pycap | a0734dfcc9d1afa299436d830509d8ede0d1c790 | [
"MIT"
] | 1 | 2017-03-13T11:20:15.000Z | 2017-03-13T11:20:15.000Z | README.md | ytn86/pycap | a0734dfcc9d1afa299436d830509d8ede0d1c790 | [
"MIT"
] | null | null | null | README.md | ytn86/pycap | a0734dfcc9d1afa299436d830509d8ede0d1c790 | [
"MIT"
] | null | null | null | # pycap
A simple Packet Capture & Parser. (under construction)
### example
```shell-session
$ sudo python example.py eth1
srcIP : 192.168.2.10
dstIP : 8.8.8.8
ICMP Type: 8
ICMP Code: 0
```
### License
MIT License | 11.526316 | 54 | 0.675799 | eng_Latn | 0.388158 |
f47919b435ffe5c8a9b671284536f0f118c4c2d9 | 49 | md | Markdown | _pages/01_accueil.md | ladgeneration/ladgeneration.github.io | b215a8701d2a9cd8f777f31b95d3798be3b415a1 | [
"MIT"
] | null | null | null | _pages/01_accueil.md | ladgeneration/ladgeneration.github.io | b215a8701d2a9cd8f777f31b95d3798be3b415a1 | [
"MIT"
] | null | null | null | _pages/01_accueil.md | ladgeneration/ladgeneration.github.io | b215a8701d2a9cd8f777f31b95d3798be3b415a1 | [
"MIT"
] | 1 | 2021-05-21T16:00:26.000Z | 2021-05-21T16:00:26.000Z | ---
layout: page
title: Accueil
permalink: /
---
| 8.166667 | 14 | 0.632653 | eng_Latn | 0.681987 |
f47a26b6a8e1009ec44be74e34df344202d1a8c4 | 412 | md | Markdown | fetch_tasks/README.md | GT-RAIL/assistance_arbitration | 84d7cfb6e08f0dd23de9fa106264726f19ef82ea | [
"MIT"
] | null | null | null | fetch_tasks/README.md | GT-RAIL/assistance_arbitration | 84d7cfb6e08f0dd23de9fa106264726f19ef82ea | [
"MIT"
] | 32 | 2018-09-11T12:34:06.000Z | 2020-08-25T19:57:26.000Z | fetch_tasks/README.md | GT-RAIL/assistance_arbitration | 84d7cfb6e08f0dd23de9fa106264726f19ef82ea | [
"MIT"
] | 2 | 2020-02-21T03:16:41.000Z | 2021-08-01T17:29:43.000Z | # Fetch Tasks
**NOTE**: Some of the code here is now broken because of updates to the external `isolation` package.
An example package using the executive system described in this repository. Contains primitive actions, etc. that can be used on the Fetch robot without needing to run any additional nodes.
By default the code in this repository is setup to perform the task described in [TODO-AAAI-Paper](#).
| 51.5 | 189 | 0.781553 | eng_Latn | 0.999739 |
f47a7292c6a2905e49676334a16eaced84a99576 | 1,751 | md | Markdown | leetcode/2019-09-16-construct-target-array-with-multiple-sums.md | awesee/awesee.github.io | 911032952f110bdaeac734cd52ed5b413e272ba7 | [
"MIT"
] | 2 | 2021-11-27T10:48:08.000Z | 2022-03-28T03:51:14.000Z | leetcode/2019-09-16-construct-target-array-with-multiple-sums.md | awesee/awesee.github.io | 911032952f110bdaeac734cd52ed5b413e272ba7 | [
"MIT"
] | null | null | null | leetcode/2019-09-16-construct-target-array-with-multiple-sums.md | awesee/awesee.github.io | 911032952f110bdaeac734cd52ed5b413e272ba7 | [
"MIT"
] | null | null | null | ---
layout: single
title: "多次求和构造目标数组"
date: 2019-09-16 21:30:00 +0800
categories: [Leetcode]
tags: [Array, Heap (Priority Queue)]
permalink: /problems/construct-target-array-with-multiple-sums/
---
## 1354. 多次求和构造目标数组 (Hard)
{% raw %}
<p>给你一个整数数组 <code>target</code> 。一开始,你有一个数组 <code>A</code> ,它的所有元素均为 1 ,你可以执行以下操作:</p>
<ul>
<li>令 <code>x</code> 为你数组里所有元素的和</li>
<li>选择满足 <code>0 <= i < target.size</code> 的任意下标 <code>i</code> ,并让 <code>A</code> 数组里下标为 <code>i</code> 处的值为 <code>x</code> 。</li>
<li>你可以重复该过程任意次</li>
</ul>
<p>如果能从 <code>A</code> 开始构造出目标数组 <code>target</code> ,请你返回 True ,否则返回 False 。</p>
<p> </p>
<p><strong>示例 1:</strong></p>
<pre><strong>输入:</strong>target = [9,3,5]
<strong>输出:</strong>true
<strong>解释:</strong>从 [1, 1, 1] 开始
[1, 1, 1], 和为 3 ,选择下标 1
[1, 3, 1], 和为 5, 选择下标 2
[1, 3, 5], 和为 9, 选择下标 0
[9, 3, 5] 完成
</pre>
<p><strong>示例 2:</strong></p>
<pre><strong>输入:</strong>target = [1,1,1,2]
<strong>输出:</strong>false
<strong>解释:</strong>不可能从 [1,1,1,1] 出发构造目标数组。
</pre>
<p><strong>示例 3:</strong></p>
<pre><strong>输入:</strong>target = [8,5]
<strong>输出:</strong>true
</pre>
<p> </p>
<p><strong>提示:</strong></p>
<ul>
<li><code>N == target.length</code></li>
<li><code>1 <= target.length <= 5 * 10^4</code></li>
<li><code>1 <= target[i] <= 10^9</code></li>
</ul>
{% endraw %}
### 相关话题
[[数组](https://github.com/awesee/leetcode/tree/main/tag/array/README.md)]
[[堆(优先队列)](https://github.com/awesee/leetcode/tree/main/tag/heap-priority-queue/README.md)]
---
## [解法](https://github.com/awesee/leetcode/tree/main/problems/construct-target-array-with-multiple-sums)
| 25.376812 | 188 | 0.628212 | yue_Hant | 0.224653 |
f47af0c7c2b6b81cdb75ef07a62c1115f8327f1e | 2,654 | md | Markdown | README.md | nschnepple19/personal-website-project | b84b2530d73aa8d0c6e200ecac7efb63abfcfd2a | [
"Apache-2.0"
] | null | null | null | README.md | nschnepple19/personal-website-project | b84b2530d73aa8d0c6e200ecac7efb63abfcfd2a | [
"Apache-2.0"
] | null | null | null | README.md | nschnepple19/personal-website-project | b84b2530d73aa8d0c6e200ecac7efb63abfcfd2a | [
"Apache-2.0"
] | null | null | null | # personal-website-project
Personal Website Project
This is my Personal Website project that will be used as a professional portfolio..
## PWP Feedback
### Milestone 1
#### Feedback:
Amazing job on your PWP Milestone 1 submission. Your purpose, audience, and goals are well defined and meet the scope of the project. Your persona is well formed and seems like a real person. The user-story is simple and straight to the point. You also did a great job of defining the tech that your persona uses. I ran your site through https://validator.w3.org/ to check for errors in your HTML is error free. Finally, your directory structure and the .gitignore file was setup according to project specifications.
#### Grades: Tier IV
### Milestone 2a
Good work on these wireframes and accompanying content strategy - your design ideas are clear and concise. One thing you might want to think about is the tone of your page and how it will speak to the visitors of your site. When I say this, think about your use of words and how you want them to flow through your content. Just something to note for your growth as a web dev. Nonetheless, everything you have here in your content strategy will help you in the development phase ahead.
For the portfolio section of your site I would suggest using an interactive gallery such as [FancyBox](http://fancyapps.com/fancybox/3/)
You might also want to take a look at this here if you're also looking at making your photos have some animated: https://tympanus.net/Development/HoverEffectIdeas/index.html
Additional recommendations:
* [Google Fonts](https://fonts.google.com/) for custom typography - it has ~1000 fonts and is easy to integrate.
* [Bootstrap](https://getbootstrap.com/) will have all the documentation you need for generating a quick layout for your site.
* [FontAwesome](https://fontawesome.com/) is a good quality free library for including iconography.
* [Animate CSS](https://daneden.github.io/animate.css/) animation library for CSS
You might want to read ahead regarding the contact form integration, and that documentation is here: https://bootcamp-coders.cnm.edu/class-materials/jquery-validated-captcha-form/
Your Milestone 2a passes at Tier IV. You are now clear to begin development on your PWP. Looking forward to seeing your project take shape!
We'll be building PWP in a file named index.php inside of /public_html. Please note that no frontend-facing site files should live outside of the /public_html directory. Remember use an organized and standards-compliant directory structure to house all images, JavaScript, CSS, etc. We are done with the /documentation directory for now!
| 82.9375 | 516 | 0.788998 | eng_Latn | 0.998277 |
f47b369f9db1a675709d9c8c75492109ba08a314 | 8,260 | md | Markdown | articles/service-fabric/service-fabric-application-runas-security.md | CosmoHuang/azure-docs.zh-cn | f4707b0ff0677024bab25636b163d1e3d8bed3e8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-application-runas-security.md | CosmoHuang/azure-docs.zh-cn | f4707b0ff0677024bab25636b163d1e3d8bed3e8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-application-runas-security.md | CosmoHuang/azure-docs.zh-cn | f4707b0ff0677024bab25636b163d1e3d8bed3e8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 在系统和本地安全帐户下运行 Azure Service Fabric 服务 | Microsoft Docs
description: 了解如何在系统和本地安全帐户下运行 Service Fabric 应用程序。 创建安全主体并应用运行方式策略以安全运行你的服务。
services: service-fabric
documentationcenter: .net
author: msfussell
manager: timlt
editor: ''
ms.assetid: 4242a1eb-a237-459b-afbf-1e06cfa72732
ms.service: service-fabric
ms.devlang: dotnet
ms.topic: conceptual
ms.tgt_pltfrm: NA
ms.workload: NA
ms.date: 03/29/2018
ms.author: mfussell
ms.openlocfilehash: 33ca23834f35e631c6943ec22a88f4fe3dc853e1
ms.sourcegitcommit: eb75f177fc59d90b1b667afcfe64ac51936e2638
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/16/2018
---
# <a name="run-a-service-as-a-local-user-account-or-local-system-account"></a>以本地用户帐户或本地系统帐户运行服务
使用 Azure Service Fabric,可以保护群集中以不同用户帐户运行的应用程序。 默认情况下,Service Fabric 应用程序在运行 Fabric.exe 程序的帐户之下运行。 Service Fabric 还提供了在本地用户或系统帐户下运行应用程序的功能。 受支持的本地系统帐户类型为 **LocalUser**、**NetworkService**、**LocalService** 和 **LocalSystem**。 如果在 Windows 独立群集上运行 Service Fabric,可以使用 [Active Directory 域帐户](service-fabric-run-service-as-ad-user-or-group.md)或[组托管服务帐户](service-fabric-run-service-as-gmsa.md)运行服务。
在应用程序清单中,在 **Principals** 部分中定义运行服务或保护资源时所需的用户帐户。 还可以定义并创建用户组,以便统一管理一个或多个用户。 如果不同的服务入口点有多个用户,而且这些用户需要拥有可在组级别使用的常用权限,则这种做法特别有用。 然后,可以在 RunAs 策略中引用用户,该策略应用于应用程序中的特定服务或所有服务。
默认情况下,RunAs 策略应用于主入口点。 如果需要[在系统帐户下运行特定的高权限设置操作](service-fabric-run-script-at-service-startup.md),则还可以将 RunAs 策略应用于安装程序入口点,或者同时应用于主入口点和安装程序入口点。
> [!NOTE]
> 如果将 RunAs 策略应用到服务,且服务清单使用 HTTP 协议声明终结点资源,则必须指定 SecurityAccessPolicy。 有关详细信息,请参阅[为 HTTP 和 HTTPS 终结点分配安全访问策略](service-fabric-assign-policy-to-endpoint.md)。
>
## <a name="run-a-service-as-a-local-user"></a>以本地用户身份运行服务
可以创建一个本地用户,用于帮助保护应用程序中的服务。 当在应用程序清单的主体部分中指定 **LocalUser** 帐户类型时,Service Fabric 在部署应用程序的计算机上创建本地用户帐户。 默认情况下,这些帐户的名称与应用程序清单中指定的名称不相同(例如,以下应用程序清单示例中的 *Customer3*)。 相反,它们是动态生成的并带有随机密码。
在 **ServiceManifestImport** 的 **RunAsPolicy** 部分中,从 **Principals** 部分中指定用来运行服务代码包的用户帐户。 以下示例展示了如何创建本地用户并将 RunAs 策略应用于主入口点:
```xml
<?xml version="1.0" encoding="utf-8"?>
<ApplicationManifest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ApplicationTypeName="Application7Type" ApplicationTypeVersion="1.0.0" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<Parameters>
<Parameter Name="Web1_InstanceCount" DefaultValue="-1" />
</Parameters>
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="Web1Pkg" ServiceManifestVersion="1.0.0" />
<ConfigOverrides />
<Policies>
<RunAsPolicy CodePackageRef="Code" UserRef="Customer3" EntryPointType="Main" />
</Policies>
</ServiceManifestImport>
<DefaultServices>
<Service Name="Web1" ServicePackageActivationMode="ExclusiveProcess">
<StatelessService ServiceTypeName="Web1Type" InstanceCount="[Web1_InstanceCount]">
<SingletonPartition />
</StatelessService>
</Service>
</DefaultServices>
<Principals>
<Users>
<User Name="Customer3" />
</Users>
</Principals>
</ApplicationManifest>
```
## <a name="create-a-local-user-group"></a>创建本地用户组
可以创建用户组并向组中添加一个或多个用户。 如果不同的服务入口点对应有多个用户,而且这些用户需要拥有特定的常见组级别权限,这种做法就特别有用。 下面的应用程序清单示例展示了一个名为 *LocalAdminGroup* 的具有管理员权限的本地组。 *Customer1* 和 *Customer2* 这两个用户已成为此本地组的成员。 在 **ServiceManifestImport** 部分中,应用了一个 RunAs 策略来以 *Customer2* 身份运行 *Stateful1Pkg* 代码包。 应用了另一个 RunAs 策略来以 *Customer1* 身份运行 *Web1Pkg* 代码包。
```xml
<?xml version="1.0" encoding="utf-8"?>
<ApplicationManifest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ApplicationTypeName="Application7Type" ApplicationTypeVersion="1.0.0" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<Parameters>
<Parameter Name="Stateful1_MinReplicaSetSize" DefaultValue="3" />
<Parameter Name="Stateful1_PartitionCount" DefaultValue="1" />
<Parameter Name="Stateful1_TargetReplicaSetSize" DefaultValue="3" />
<Parameter Name="Web1_InstanceCount" DefaultValue="-1" />
</Parameters>
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="Stateful1Pkg" ServiceManifestVersion="1.0.0" />
<ConfigOverrides />
<Policies>
<RunAsPolicy CodePackageRef="Code" UserRef="Customer2" EntryPointType="Main"/>
</Policies>
</ServiceManifestImport>
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="Web1Pkg" ServiceManifestVersion="1.0.0" />
<ConfigOverrides />
<Policies>
<RunAsPolicy CodePackageRef="Code" UserRef="Customer1" EntryPointType="Main"/>
</Policies>
</ServiceManifestImport>
<DefaultServices>
<Service Name="Stateful1" ServicePackageActivationMode="ExclusiveProcess">
<StatefulService ServiceTypeName="Stateful1Type" TargetReplicaSetSize="[Stateful1_TargetReplicaSetSize]" MinReplicaSetSize="[Stateful1_MinReplicaSetSize]">
<UniformInt64Partition PartitionCount="[Stateful1_PartitionCount]" LowKey="-9223372036854775808" HighKey="9223372036854775807" />
</StatefulService>
</Service>
<Service Name="Web1" ServicePackageActivationMode="ExclusiveProcess">
<StatelessService ServiceTypeName="Web1Type" InstanceCount="[Web1_InstanceCount]">
<SingletonPartition />
</StatelessService>
</Service>
</DefaultServices>
<Principals>
<Groups>
<Group Name="LocalAdminGroup">
<Membership>
<SystemGroup Name="Administrators" />
</Membership>
</Group>
</Groups>
<Users>
<User Name="Customer1">
<MemberOf>
<Group NameRef="LocalAdminGroup" />
</MemberOf>
</User>
<User Name="Customer2">
<MemberOf>
<Group NameRef="LocalAdminGroup" />
</MemberOf>
</User>
</Users>
</Principals>
</ApplicationManifest>
```
## <a name="apply-a-default-policy-to-all-service-code-packages"></a>将默认策略应用到所有服务代码包
**DefaultRunAsPolicy** 部分用于针对未定义特定 **RunAsPolicy** 的所有代码包指定默认用户帐户。 如果在应用程序所用的服务清单中指定的大多数代码包必须以同一用户运行,则应用程序可以只定义该用户帐户的默认 RunAs 策略。 以下示例指定如果代码包未指定 **RunAsPolicy**,则此代码包应该以 principals 部分中指定的 **MyDefaultAccount** 用户身份运行。 支持的帐户类型为 LocalUser、NetworkService、LocalSystem 和 LocalService。 如果使用本地用户或服务,则还需要指定帐户名称和密码。
```xml
<?xml version="1.0" encoding="utf-8"?>
<ApplicationManifest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ApplicationTypeName="Application7Type" ApplicationTypeVersion="1.0.0" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<Parameters>
<Parameter Name="Web1_InstanceCount" DefaultValue="-1" />
</Parameters>
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="Web1Pkg" ServiceManifestVersion="1.0.0" />
<ConfigOverrides />
</ServiceManifestImport>
<DefaultServices>
<Service Name="Web1" ServicePackageActivationMode="ExclusiveProcess">
<StatelessService ServiceTypeName="Web1Type" InstanceCount="[Web1_InstanceCount]">
<SingletonPartition />
</StatelessService>
</Service>
</DefaultServices>
<Principals>
<Users>
<User Name="MyDefaultAccount" AccountType="NetworkService" />
</Users>
</Principals>
<Policies>
<DefaultRunAsPolicy UserRef="MyDefaultAccount" />
</Policies>
</ApplicationManifest>
```
## <a name="debug-a-code-package-locally-using-console-redirection"></a>使用控制台重定向在本地调试代码包
有时,查看正在运行的服务的控制台输出对于调试很有帮助。 可以在服务清单中的入口点上设置控制台重定向策略,以便将输出写入到文件。 文件输出将写入到部署和运行应用程序的群集节点上名为 **log** 的应用程序文件夹中。
> [!WARNING]
> 永远不要在生产中部署的应用程序中使用控制台重定向策略,因为这可能会影响应用程序故障转移。 *仅*将其用于本地开发和调试目的。
>
>
下面的服务清单示例展示了如何使用 FileRetentionCount 值启用控制台重定向:
```xml
<CodePackage Name="Code" Version="1.0.0">
<EntryPoint>
<ExeHost>
<Program>VotingWeb.exe</Program>
<WorkingFolder>CodePackage</WorkingFolder>
<ConsoleRedirection FileRetentionCount="10"/>
</ExeHost>
</EntryPoint>
</CodePackage>
```
<!--Every topic should have next steps and links to the next logical set of content to keep the customer engaged-->
## <a name="next-steps"></a>后续步骤
* [了解应用程序模型](service-fabric-application-model.md)
* [在服务清单中指定资源](service-fabric-service-manifest-resources.md)
* [部署应用程序](service-fabric-deploy-remove-applications.md)
[image1]: ./media/service-fabric-application-runas-security/copy-to-output.png
| 43.246073 | 390 | 0.744068 | yue_Hant | 0.902079 |
f47b3cf7973b277ae495dde06802432ae1b681d4 | 596 | md | Markdown | GOVERNANCE.md | eliseemond/elastix | 0e8572f4a315e0a8f08b07d5947b4f3ac160b575 | [
"Apache-2.0"
] | 318 | 2017-05-22T11:39:46.000Z | 2022-03-27T04:40:13.000Z | GOVERNANCE.md | eliseemond/elastix | 0e8572f4a315e0a8f08b07d5947b4f3ac160b575 | [
"Apache-2.0"
] | 358 | 2017-05-22T11:36:05.000Z | 2022-03-18T15:49:10.000Z | GOVERNANCE.md | eliseemond/elastix | 0e8572f4a315e0a8f08b07d5947b4f3ac160b575 | [
"Apache-2.0"
] | 102 | 2017-05-22T11:38:44.000Z | 2021-12-23T20:27:51.000Z | # Governance of `elastix` #
The elastix project is still under governance and supervision of the original lead developers Marius Staring and Stefan Klein.
Nowadays many subprojects of elastix exist and the supervision of these projects is also in hands of several other developers/scientists.
The subprojects and their main other supervisors are:
* `elastix`: Niels Dekker
* `elastix model zoo`: Viktor van der Valk
* `elastix_napari`: Viktor van der Valk
* `elastix docker image`: Viktor van der Valk
* `SimpleElastix`: Kasper Marstal
* `ITKElastix`: Matt McCormick, Dženan Zukić
| 42.571429 | 137 | 0.771812 | eng_Latn | 0.958587 |
f47b59c2837a813f992b26e6423b3383249ffaeb | 4,777 | md | Markdown | wdk-ddi-src/content/compstui/ns-compstui-_optparam.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/compstui/ns-compstui-_optparam.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/compstui/ns-compstui-_optparam.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:compstui._OPTPARAM
title: "_OPTPARAM"
author: windows-driver-content
description: An array of OPTPARAM structures is used by CPSUI applications (including printer interface DLLs) for describing all the parameter values associated with a property sheet option. The array's address is included in an OPTTYPE structure.
old-location: print\optparam.htm
old-project: print
ms.assetid: d0cd2867-783c-4a41-a819-e919d4ffc1e3
ms.author: windowsdriverdev
ms.date: 2/26/2018
ms.keywords: "*POPTPARAM, OPTPARAM, OPTPARAM structure [Print Devices], POPTPARAM, POPTPARAM structure pointer [Print Devices], _OPTPARAM, compstui/OPTPARAM, compstui/POPTPARAM, cpsuifnc_1c22c283-993e-45d7-b0c7-1148eafeb13c.xml, print.optparam"
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: struct
req.header: compstui.h
req.include-header: Compstui.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- compstui.h
api_name:
- OPTPARAM
product: Windows
targetos: Windows
req.typenames: OPTPARAM, *POPTPARAM
---
# _OPTPARAM structure
## -description
An array of OPTPARAM structures is used by CPSUI applications (including printer interface DLLs) for describing all the parameter values associated with a <a href="https://msdn.microsoft.com/572330d6-1a1b-46fd-bfb4-be2b0990bca4">property sheet option</a>. The array's address is included in an <a href="..\compstui\ns-compstui-_opttype.md">OPTTYPE</a> structure.
## -syntax
````
typedef struct _OPTPARAM {
WORD cbSize;
BYTE Flags;
BYTE Style;
LPTSTR pData;
ULONG_PTR IconID;
LPARAM lParam;
ULONG_PTR dwReserved[2];
} OPTPARAM, *POPTPARAM;
````
## -struct-fields
### -field cbSize
Size, in bytes, of the OPTPARAM structure.
### -field Flags
Optional bit flags that modify the parameter's characteristics. The following flags can be set in any combination:
#### OPTPF_DISABLED
If set, the parameter is not user-selectable. Can be used with the following option types:
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562825">TVOT_2STATES</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562827">TVOT_3STATES</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562833">TVOT_COMBOBOX</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562839">TVOT_LISTBOX</a>
#### OPTPF_HIDE
If set, the parameter not displayed in the treeview. Can be used with the following option types:
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562827">TVOT_3STATES</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562833">TVOT_COMBOBOX</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff562839">TVOT_LISTBOX</a>
#### OPTPF_ICONID_AS_HICON
If set, the <b>IconID</b> member contains an icon handle.
If not set, the <b>IconID</b> member contains an icon resource identifier.
#### OPTPF_OVERLAY_NO_ICON
If set, CPSUI overlays its IDI_CPSUI_NO icon onto the icon identified by the <b>IconID</b> member.
#### OPTPF_OVERLAY_STOP_ICON
If set, CPSUI overlays the IDI_CPSUI_STOP icon onto the icon identified by the <b>IconID</b> member.
#### OPTPF_OVERLAY_WARNING_ICON
If set, CPSUI overlays its IDI_CPSUI_WARNING icon onto the icon identified by the <b>IconID</b> member.
#### OPTPF_USE_HDLGTEMPLATE
If set, <b>lParam</b> contains a template handle.
If not set, <b>lParam</b> contains a template resource identifier.
(Used only if <b>Style</b> is PUSHBUTTON_TYPE_DLGPROC.)
### -field Style
Push button style, used only for the <a href="https://msdn.microsoft.com/library/windows/hardware/ff562844">TVOT_PUSHBUTTON</a> option type.
### -field pData
Pointer to the parameter's value. Use of this member is dependent on the <a href="https://msdn.microsoft.com/3b3c002c-a201-4f81-b208-30864343409b">CPSUI option type</a>.
### -field IconID
Usually identifies the icon to be associated with the option parameter, but is sometimes used for other purposes. Use of this member is dependent on the <a href="https://msdn.microsoft.com/3b3c002c-a201-4f81-b208-30864343409b">CPSUI option type</a>.
### -field lParam
Use of this member is dependent on the <a href="https://msdn.microsoft.com/3b3c002c-a201-4f81-b208-30864343409b">CPSUI option type</a>.
### -field dwReserved
Reserved, must be initialized to zero.
## -remarks
If the OPTPF_HIDE flag is set in all the OPTPARAM structures associated with an option, CPSUI hides the entire option.
| 22.856459 | 362 | 0.75403 | eng_Latn | 0.54112 |
f47c8f6e214c1a76c738b361c7b91ead462fd041 | 4,185 | md | Markdown | README.md | ISmallFish/IRM-based-Speech-Enhancement-using-LSTM | 7fdd2039bdcec4add646c52b138916e8e32501b6 | [
"MIT"
] | 75 | 2019-11-22T11:36:48.000Z | 2022-03-20T10:00:31.000Z | README.md | ISmallFish/IRM-based-Speech-Enhancement-using-LSTM | 7fdd2039bdcec4add646c52b138916e8e32501b6 | [
"MIT"
] | 3 | 2020-01-09T09:38:58.000Z | 2022-03-16T09:38:43.000Z | README.md | zzc2/IRM-based-Speech-Enhancement-using-LSTM | 7fdd2039bdcec4add646c52b138916e8e32501b6 | [
"MIT"
] | 23 | 2019-12-10T05:16:03.000Z | 2022-01-28T11:08:50.000Z | # IRM based Speech Enhancement using LSTM
基于理想浮值掩蔽(Ideal Ratio Mask,IRM)使用 LSTM 进行语音增强。
![audio](./doc/tensorboard_scalar.png)
![audio](./doc/tensorboard_spectrogram.png)
## 准备
- Python 3.7.x
- CUDA 10.1
- Pytorch 1.3
```shell
conda install tensorboard
pip install matplotlib librosa pystoi json5
pip install https://github.com/vBaiCai/python-pesq/archive/master.zip
```
## 使用方法
```shell script
git clone https://github.com/haoxiangsnr/UNetGAN.git
```
### 训练(train.py)
使用 `train.py` 训练模型,接收如下参数:
- `-h`,显示帮助信息
- `-C`, `--config`, 指定训练相关的配置文件,通常存放于`config/train/`目录下,配置文件拓展名为`json5`
- `-R`, `--resume`, 从最近一次保存的模型断点处继续训练
语法:`python train.py [-h] -C CONFIG [-R]`,例如:
```shell script
python train.py -C config/20121212_noALoss.json5
# 训练模型所用的配置文件为 config/20121212_noALoss.json5
# 默认使用所有的GPU训练
CUDA_VISIBLE_DEVICES=1,2 python train.py -C config/20121212_noALoss.json5
# 训练模型所用的配置文件为 config/20121212_noALoss.json5
# 使用索引为1、2的GPU训练
CUDA_VISIBLE_DEVICES=1,2 python train.py -C config/20121212_noALoss.json5 -R
# 训练模型所用的配置文件为 config/20121212_noALoss.json5
# 使用索引为1、2的GPU训练
# 从最近一次保存的模型断点处继续训练
```
补充说明:
- 一般训练所需要的配置文件名都放置在`config/train/`目录下,配置文件拓展名为`json5`
- 配置文件中的参数作用见“参数说明”部分
### 增强(enhancement.py)
TODO
### 测试(test.py)
TODO
### 可视化
训练过程中产生的所有日志信息都会存储在`<config["root_dir"]>/<config filename>/`目录下。这里的`<config["root_dir"]>`指配置文件中的 `root_dir`参数的值,`<config filename>`指配置文件名。
假设用于训练的配置文件为`config/train/sample_16384.json5`,`sample_16384.json`中`root_dir`参数的值为`/home/Exp/UNetGAN/`,那么训练过程中产生的日志文件会存储在 `/home/Exp/UNetGAN/sample_16384/` 目录下。该目录包含以下内容:
- `logs/`: 存储`Tensorboard`相关的数据,包含损失曲线,语音波形,语音文件等
- `checkpoints/`: 存储模型训练过程中产生的所有断点,后续可通过这些断点来重启训练或进行语音增强
- `<Date>.json`文件: 训练时使用的配置文件的备份
## 参数说明
训练,测试与增强都需要指定具体的配置文件。
### 训练参数
训练过程中产生的日志信息会存放在`<config["root_dir"]>/<config filename>/`目录下
```json5
{
"seed": 0, // 为Numpy,PyTorch设置随机种子,保证实验的可重复性
"description": "", // 实验描述
"root_dir": "/media/imucs/DataDisk/haoxiang/Experiment/IRM", // 项目根目录
"cudnn_deterministic": false, // 开启可保证实验的可重复性,但影响性能
"trainer": {
"epochs": 1200, // 实验进行的总轮次
"save_checkpoint_interval": 10, // 存储模型断点的间隔
"validation": {
"interval": 10, // 执行验证的间隔
"find_max": true,
"custom": {
"visualize_audio_limit": 20, // 验证时 Tensorboard 中可视化音频文件的数量
"visualize_waveform_limit": 20, // 验证时 Tensorboard 中可视化语音波形的数量
"visualize_spectrogram_limit": 20, // 验证时 Tensorboard 中可视化语谱图的数量
}
}
},
"generator_model": {
"module": "model.generator_basic_model", // 存放生成器的文件
"main": "GeneratorBasicModel", // 生成器类
"args": {} // 传递给生成器类的参数
},
"model": {
"module": "model.discriminator_basic_model", // 存放判别器的文件
"main": "DiscriminatorBasicModel", // 判别器类
"args": {} // 传递给判别器类的参数
},
"loss_function": {
"module": "util.loss", // 存放生成器额外损失的文件
"main": "mse_loss", // 具体损失函数
"args": {} // 传给该函数的参数
},
"optimizer": {
"lr": 0.0002, // 学习率
"beta1": 0.9, // Adam动量参数1
"beta2": 0.999 // Adam动量参数2
},
"train_dataset": {
"module": "dataset.irm_dataset",
"main": "IRMDataset",
"args": {
// 详见 dataset/*_dataset.py
}
},
"validation_dataset": {
"module": "dataset.irm_dataset",
"main": "IRMDataset",
"args": {
// 详见 dataset/*_dataset.py
}
},
"train_dataloader": {
"batch_size": 400,
"num_workers": 40,
"shuffle": true,
"pin_memory": true
}
}
```
### 验证参数
TODO
### 测试参数
TODO
## 可重复性
我们已经将可以设置随机种子的位置抽离成了可配置的参数,这保证了基本的可重复性。
如果你使用 CuDNN 进行运算加速,还可以进一步设置确定性参数“cudnn_deterministic”为“True”,但这会影响性能。
## 须知
当前项目并非严肃的生产项目,若你发现 bug,请不要犹豫,直接提交 pull request.
## ToDo
- [x] LSTM models
- [x] Training logic
- [x] Validation logic
- [x] Visualization of validation set (waveform, audio, spectrogram, config)
- [x] PESQ and STOI metrics
- [x] Optimize config parameters
- [x] Improve the documentation
- [ ] Test script
- [ ] English version README
## LICENSE
MIT License
Copyright (c) 2019 郝翔
| 23.379888 | 169 | 0.641338 | yue_Hant | 0.573276 |
f47d1fafff41f17f5effd2c4b9ee60018dcb3852 | 51,817 | markdown | Markdown | _posts/2012-07-05-method-and-apparatus-for-connecting-consumers-with-one-or-more-product-or-service-providers.markdown | api-evangelist/patents-2012 | b40b8538ba665ccb33aae82cc07e8118ba100591 | [
"Apache-2.0"
] | 3 | 2018-09-08T18:14:33.000Z | 2020-10-11T11:16:26.000Z | _posts/2012-07-05-method-and-apparatus-for-connecting-consumers-with-one-or-more-product-or-service-providers.markdown | api-evangelist/patents-2012 | b40b8538ba665ccb33aae82cc07e8118ba100591 | [
"Apache-2.0"
] | null | null | null | _posts/2012-07-05-method-and-apparatus-for-connecting-consumers-with-one-or-more-product-or-service-providers.markdown | api-evangelist/patents-2012 | b40b8538ba665ccb33aae82cc07e8118ba100591 | [
"Apache-2.0"
] | 4 | 2018-04-05T13:55:08.000Z | 2022-02-10T11:30:07.000Z | ---
title: Method and apparatus for connecting consumers with one or more product or service providers
abstract: A method and apparatus that for matching consumers with product or service providers is disclosed. The method may include establishing one or more profiles from one or more product or service providers, establishing one or more profiles from one or more consumers, receiving a consumer's subject matter request, matching the consumer with one or more products or service providers based on at least one of the consumer's profile and subject matter request, and providing the consumer's contact information to the matched one or more products or service providers.
url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=08452659&OS=08452659&RS=08452659
owner: Heavy Hammer, Inc.
number: 08452659
owner_city: Annapolis
owner_country: US
publication_date: 20120705
---
This application is a Continuation of U.S. patent application Ser. No. 13 053 869 filed on Mar. 22 2011 which claims priority from U.S. Provisional Application Ser. No. 61 381 235 filed on Sep. 9 2010 which claims priority from U.S. Provisional Application Ser. No. 61 316 695 filed on Mar. 23 2010 the contents of which are incorporated herein in their entireties.
The disclosed embodiments relate to generating a system and interface for connecting a product or service provider and a consumer in search of or in need of such product or service.
Coventional Information sites e.g websites software applications computing devices etc. exist for individuals and consumers to offer goods and services for sale at their standard or discounted price or at a price determined using an auction or variable pricing system. However often consumers spend time seeking a service provider or entity to buy goods or obtain services by searching on line or using a telephone directory. In this manner consumers often have to spend considerable time and effort to contact several service providers via information sites or telephone to obtain an acceptable price and availability from the goods or service providers. They also are forced to search using Internet search engines or generic websites that offer multiple types of products with the hope that a product or service provider who carries the product they are looking for will show up in a search result under the search terms they are looking for and that the product or service provider presented will have the product at the price they are willing to pay and be in a relevant geographic areas.
Additionally Service Providers and product or service providers spend time and resources in an effort to attract consumers for their products and services via advertising. Current advertising methods such as television radio and conventional online advertising while effective in presenting information to large groups of potential buyers based on traditional demographic criteria such as general geograhic location and in the case of internet search engines particular topics of interest indicated by the search they still are not efficient. These methods require the buyer of a product or service provider s products to be exposed to the specific advertising for which that product or service provider has paid. In the scenario of internet advertising a product or service provider s potential consumer perform a search for a term or group of terms that will return that product or service provider as part of a paid or unpaid search result on an internet search engine or visit a particular website displaying advertising from that product or service provider.
In the event that a product or service provider does chose to develop an online advertising campaign the product or service provider must now spend resources to manage that campaign and entice viewers to visit their website through the use of limited advertising text hyperlinks and advertising banners. If the consumer does select the product or service provider s advertisement it is then up to the product or service provider s website to convert the passive searcher into a consumer with effective online lead generation which differs in effectiveness by industry and from product or service provider to product or service provider . Additionally not all current methods of online advertising account for window shoppers or users who click on the wrong link both of which cost the advertiser money. As most product or service providers do not specialize in web design the product or service provider s conversion rate becomes a secondary problem once a potential consumer has selected the product or service provider s website. A high conversion rate can reduce the cost per acquisition while a low conversion rate will increase the cost per acquisition.
Ultimately regardless of the a product or service provider s ability to maintain or afford conventional methods of advertising for purposes of consumer acquisition that advertising is expensive and does not allow the product or service provider to exactly target or reach their preferred clients. Moreover these conventional methods rely on the consumer being exposed to that product or service provider and selecting to work with or purchase from that particular product or service provider which can be cost prohibitive for most small to medium product or service providers.
The less traditional or common place method of client acquisition is through purchasing of Leads. Leads are generally considered consumer contact information and limited information about their interest or needs which have been generated by some source such as a survey contact form or an account created when purchasing some other service . This method while allowing for direct access to consumers does not necessarily ensure that the consumer a is expecting to be contacted by a particular product or service provider b is interested in a product or service providers specific product s or industry c is a client that matches a particular product or service provider s specific criteria or d has not been contacted by numerous other product or service providers who acquired their contact information thereby devaluing any contact by a particular product or service provider. Conversely from the consumer s perspective this method relies on all parties who have access to their contact information to 1 keep it secure 2 ensure their privacy is maintained 3 provide their information to only the people they desire and 4 only share their information with persons or entities they specifically authorize. These conventional methods also require consumers if they are interested in multiple products to produce multiple leads thereby reducing any current practical implementation to little more than online versions of Classified Wanted Ads . Additionally once a consumer has submitted their information they do not necessarily have any knowledge of by whom it is received.
A method and apparatus that for matching consumers with product or service providers is disclosed. The method may include establishing one or more profiles from one or more product or service providers establishing one or more profiles from one or more consumers receiving a consumer s subject matter request matching the consumer with one or more products or service providers based on at least one of the consumer s profile and subject matter request and providing the consumer s contact information to the matched one or more products or service providers.
Additional features and advantages of the disclosed embodiments will be set forth in the description which follows and in part will be obvious from the description or may be learned by practice of the disclosed embodiments. The features and advantages of the disclosed embodiments may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosed embodiments will become more fully apparent from the following description and appended claims or may be learned by the practice of the disclosed embodiments as set forth herein.
Various embodiments of the disclosed embodiments are discussed in detail below. While specific implementations are discussed it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without parting from the spirit and scope of the disclosed embodiments.
The disclosed embodiments may comprise a variety of embodiments such as a method and apparatus and other embodiments that relate to the basic concepts of the disclosed embodiments.
This disclosed embodiments may concern an apparatus and method for providing product or service providers with a targeted and relevant list of potential consumers and enabling product or service providers and consumers to connect directly. More particularly the disclosed embodiments may relate to a Pay for Profile method of consumer targeting and acquisition which reverses the traditional process and allows the consumer to be specific with their desired product or service criteria and therefore the product or service provider can more closely match their offers to the desires of the consumer. Product or service providers may pay for access to a consumers profile that meets their criteria and may not pay for consumers that do not meet their criteria or that have selected the product or service provider accidently while performing a search.
Pay for profile may apply market principles to advertising on the Internet. Conventional Internet applications are passive and do not provide a platform for a product or service provider to take a proactive approach. Conventional internet applications also do not allow consumers to post profiles in order to have product or service providers offer their products or services directly to them in a controlled format. Pay for Profile may enable product or service providers to more accurately target internet users that meet their criteria and focus resources on their best possible consumer. Furthermore a competitive consumer selection process may be based on a cost per acquisition price per click consumer price point number of previous clicks time posted and area of interest to helps ensure that the pricing structure reflects the market and is accessible to product or service providers regardless of their websites efficiency or the product or service provider s particular marketing budget.
Combining consumer generated profiles and lead purchasing is not commonplace. There is a need to provide a method and apparatus which can combine consumer generated profiles and lead purchasing through one s own website or third party website combined with a method and apparatus to charge the product or service provider per access of the profile while also allowing for the possibility of consumer interaction with the profile and maintaining of consumer privacy.
The proposed disclosed system and method may also concern gathering information concerning products and or services needed from a consumer posting that information which may collectively be considered a profile lead or contact information using a public or member based presentation forum e.g. a website application page intranet etc. and then providing commercial entities or other consumers the ability to access all or portions of the profile to offer their goods and or services through the intermediary presenter e.g. the website application page intranet etc. to review data for consumer research purposes and to purchase contact information for the purpose of resale.
Consumer profiles may be created and updated automatically or by the consumer through an interaction with an application or website. Profiles may not be limited to contact information of a single individual or entity.
The profile could contain both default or automatically generated values for any particular information contained in the profile. The profile may contain multiple sub profiles within it. Each sub profile may contain information unique to that sub profile or duplicate information inherited from other profiles sub profiles and parent profile. A profile may contain any information relevant to a consumer or a potential product or service provider. Examples of information contained in the profile may be contact information of the consumer geographic information relevant to the consumer parameters regarding product or service price features brands color manufacture safety ratings and consumer ratings the general description of a desired product how many times a profile may be accesses and particular product or service providers or distributors that may or may not be authorized to contact this consumer. Profiles may also contain the source of the profile system information about the consumer such as the consumer s operating system OS or internet protocol IP address history of websites visited and any other information that maybe collected automatically during creation of the profile.
Profiles may also be associated with particular channels which could include industries particular categories of products or manufactures.
Consumer may update all or part of the information in a profile. Examples of information consumers may change within a profile are narrowing or expanding of product criteria general description channels with which to associate their profiles number of clicks their profile may receive number of views their profile or portion of a profile may receive where and when and to whom the profile is presented as well as any related privacy setting such as name or contact information available to product or service provider.
The consumer may pay be paid receive discounts points coupons or any other monitory or non monitory incentive for creating or interacting with a profile. Or the service may be free to the consumer and they may receive no benefit for creating a profile.
The consumers who are associated with profiles may also be compensated in the form of a discount when they purchase a product or service through a particular retailer. This type of special discount may also leverage the collective buying power of groups. Either user created or determined by the respective information site consumers who are all interested in the same product could all receive select discounts from product or service providers who offer incentives based on the number of people interested in buying at that time.
Product or service providers may not be exclusively limited to individuals or entities selling products and could include any entity individual program code that accesses a profile.
The product or service provider may pay be paid participate as a member of a collective or the service may be free to the product or service provider. Membership may be required or the service may be a free or open source platform that allows product or service providers or users access to all or some of the profiles.
The product or service providers may create an account in the system however an account may not be required. Product or service providers may access and use their account to select profiles to view based on search terms that are relevant to the products and services offered by the product or service provider a particular budget they have established for selecting profiles or other criteria unrelated to the products they offer but related to purchasers of their products such as gender.
In would be assumed that the less time that passes between when a consumers requests a product or service and when the product or service provider accesses the profile of that consumer to offer them products the greater the chances of doing business with that consumer. For this reason the product or service provider may also be able to setup methods for automatically accessing profiles that meet a particular set of criteria or could integrate accessing of profiles with any other consumer targeting systems already in use by a product or service provider such as auto responder emails.
Product or service providers may also set up daily budgets with filter criteria similar to a CPC campaign budget where filters are set and all profiles are automatically accessed or available to be select that meet their product or service provider s criteria until the budget is exhausted for example.
Product or service provider profiles may also store previously accessed consumer profiles for reference tracking or client management purposes. This could also be used to ensure product or service providers are not charged for accessing the same profile over again unless it was updated.
Product or service providers may access profiles through any number of interfaces or platforms such as website computer applications or mobile devices software application RSS feed or any program code designed to retrieve profiles or portions of information included within a profile.
Upon access of a profile a product or service provider may be billed or charged. Upon access of the profile the product or service provider may be provided with or provided a method to access the available portion of the accessed profile based on the platform through which it was accessed or other criteria determined by the system and information inside the profile.
Information provided from the profile to a product or service provider may be provided immediately or could be stored in or associated with the Product or service provider s Account for later retrieval or other cross reference proposes.
The product or service provider s selection of a profile may result in an access request being sent to the system for the consumer s profile and a response by the system transmitting the consumers profile information to the product or service provider.
The product or service provider may only be charged when the consumer is selected by the product or service provider either through a manual selection process an automated selection process with the product or service providers preset criteria an automated process based on criteria set by the system or any combination of methods or means by which the system is able to match a consumer profile with a product or service provider.
One possible method of monetizing this system may be by implementing a Pay per Click PPC system. However rather than costing the advertiser some amount of money every time a person e.g. the person seeking goods and or services in this case otherwise known as the consumer clicks on a promotional link the person offering to provide the goods and or services a professional may be charged per profile accessed.
One possible method for determining amount of money charged to product or service providers per profile accessed may be assigning a value to each profile based on general or specific criteria. Examples that could determine or be combined to determine criteria could be number previous clicks or times the profile or portion of the profile was accessed geographic location of the consumer geographic location of the consumer relative to the product or service provider date of the consumer s request last time the profile was updated specific product or service providers that have previously accessed the profile relative to the current product or service provider exclusive vs. non exclusive the particular product or service provider accessing the profile criteria set by the system criteria set by the user. Furthermore valuation may be influenced by profiles previously contacted by the product or service provider or the proportional benefit the product or service provider could receive based on the benefit the product or service provider would see from a transaction with this consumer.
This system may also be developed to use a Cost Per Acquisition CPA Period Based Subscription Auction Fixed Fee Commodity Market or any other model or combination of models suitable for accountability and revenue generation
Profiles may be searched and accessed via multiple platforms that interface with the system. Searches may be performed based on any number of criteria or methods. Searches may not be limited to manual input of criteria by a user. Searches may be preformed automatically based on manual input of criteria by a user program code previously defined criteria of a user or the platform in which they are displayed prior to access such as within a list of results from an internet search engine .
One possible disclosed embodiment may access the system via a website search available profiles and display resulting matching profiles based on a search term selected by a product or service provider. Further embodiments may include returning profiles based on
Search results may be listed and arranged in order of relevance amount consumer is willing to pay amount and quality of information provided and time request was made geographic relevance and search term relevance with the search listing corresponding to the higher of any one or a combination of these variables. For example page ranking can be performed on a system wide basis or on a product or service provider specified basis. One exemplary page ranking technique for a real estate system can utilize the following weighting such that the results are then sorted by final score highest to lowest .
While the above ranking system utilizes a number of disclosed factors and weights to create a result score those of skill in the relevant art will understand that other weights and factors can be used instead. For example whether a property is to be bought as a first home or an investment property can be used. Additionally points can be assigned added or subtracted on other financial factors as well such as the consumer s income level the price the consumer has indicated they are willing to pay for such products or services any discount that the consumer is requesting on the product service and or the value of the particular item or service the consumer has indicated an interest in.
Similarly geographic information may also cause an increase or decrease in score. For example if a consumer is looking for a property in a particular area of interest e.g. an area close to the product or service provider an area near the area in which the consumer currently resides or an area known to have high sales a score may be increased but a score could be decreased if the area of interest for the consumer is outside of where the product or service provider is licensed to provide services. Geographic areas of interest to the consumer and the geographic areas of interest to the product or service provider can both be used together or individually and can be specified using a number of techniques such as but not limited to zip codes GPS IP address points of interest and additional means.
Additionally meta data maintained by the system about a consumer may also be used in page ranking. For example a first user that has updated his her profile more recently than a second user who otherwise has the same characteristics as the first user may be significantly more valuable to a product or service provider than the second user as the second user may not be an active consumer. Similarly the number of times that a consumer updates his her profile can be used as part of page ranking as well. The system can likewise track and use in page rank the number of times that a consumer has purchased after being contacted the number of times that the consumer has bought similar goods the average price of purchases by the consumer and or the number of purchases per time interval by the consumer.
Additionally meta data maintained by the system about the product or service provider may also be used in page ranking. For example the number of times that the product or service provider selected similar profiles and or the amount of time that the product or service provider previously spent reviewing similar consumer profiles may be used to affect page ranking.
The above discussion has focused on the use of universal static ranking where all service providers receive the same search results if they search on the same criteria. However it is also possible to utilize the above system in the context of an auction or by using bid pricing. For example if a first service provider indicates that it will pay 1 per lead then the first service provider may receive a different and better set of search results than a second service provider that indicates that it is only willing to pay 0.50 per lead. In such a case the second service provider may get delayed leads where the lead is only placed on the second service provider s results after the lead is older than a particular time period e.g. half an hour hour day or week . Alternatively the second service provider may get lower scoring leads than the first service provider that has offered to pay more per lead.
The cost per lead also need not be static. As the pool of leads begins to shrink the cost of each lead may be increased. Similarly as the pool of leads increases the cost per lead may decrease in order to have more service providers matched to consumers. The cost per lead may also be affected by volume discounts such that a service provider that has purchased a threshold number of leads begins to get a discount on future purchases for a particular period of time. Additionally price or position may be influenced by specificity of data.
Profile may be included and be listed in combination with organic search and advertising results on an existing web platform and or search engine.
Additionally the most preferred iteration of the invention may use the consumers ideal price point or price range most recent time of request and geographic area as the most heavily weighted variables in the ranking of results. These variables may be identified and presented as part of the profile prior to the product or service provider selecting and accessing the consumer s profile.
The system of the disclosed embodiments may also act as an intermediary for all or some of the communication of information between the consumers who create profiles and product or service providers who access the profiles. This may include correspondence between the parties or the exchange of funds.
The system of the disclosed embodiments may allow consumers to search for products and speak with knowledgeable professionals about what they are looking for without divulging personal information that may be sold or passed on to other entities without their direct knowledge.
The product or service provider may contact the consumer via a protected email protected phone traditional email text message telephone post mail and or whichever means of communications the consumer or the system has indicated or determined to be of benefit to the product or service provider. In addition the contact with the consumer may be in the form of a posting to a password protected message board or web site e.g. Facebook or LinkedIn . For example a protected email can be a temporary email address at the intermediary system e.g. x13 kdsf35 systemsdomainname.domainextension which the system then forwards to a customer s real email address MeganMatthew customersdomainname.domainextension where domainextension typically is .com .edu etc. Similarly a protected phone number is a phone number of the system that the product or service provider calls and then the system bridges the call to the customer s real phone number. A system may have multiple phone numbers that it associates with certain leads for a period of time or the system may have one or more central numbers that once called use PINs and or system assigned customer numbers that identify which of the leads the product or service provider wishes to contact. Because the system acts as an intermediary the number of contacts to a potential customer can be limited by the system e.g. three contacts can be made before the system stops forwarding email or calls . In one embodiment the system can contact the customer to determine if the customer wishes to authorize an additional number of contacts from the product or service provider such that the product or service provider can continue to send the customer information and or offers.
In addition to a traditional revenue model where the intermediary facilitating the communication between the two parties collects and retains the revenue the intermediary site may also perform a revenue split with the consumer using the service. This process may be based on a direct per transaction method based on typical affiliate programs or on a dividend method where each member is paid some portion of global payout. This payout may also be in the form of credit or points which could be exchanged for goods services or cash. These payouts may also be in the form of gift cards for example.
The system for collecting organizing storing and returning lists of profiles to be accessed or delivering information stored with in a profile to a product or service provider when accessed may be stored on a computer on a network and may be accessed by multiple information sites.
In one possible disclosed embodiment the system may allow for multiple devices and systems all containing their own methods collectively platforms to connect with the system. These platforms would be able to display submit create modify extract and monetize profiles within or somehow associated with the system.
Examples of platforms that may access the system include a website related to a particular subject matter internet search engines software applications and mobile devices. The combination of organic search results from an internet search engine combined with a list of consumer profiles may be offered as an option on a search engine or website if that search engine or website is able to determine if the user is performing a search as a product or service provider or a consumer with an account on the system.
These platforms may access the system via Really Simple Syndication RSS feeds software Application Programming Interfaces APIs custom code connected to the system and designed to present information on a website such as a Widget or some other code designed to allow other platforms to interface with the system.
The system may determine how each profile is relevant to a search request based on search criteria program code with thin the system valuation of the profile number of search terms a geographic area and amount product or service provider is willing to pay for access to all of part of the information contained in the profile information contained within a profile or group of profiles other criteria determined by the system platform accessing the system or any combination.
The system may based on user interaction or use of program code add additional information to a consumer profile based on available information from other data source or providers to further enhance the value of the profile to a product or service provider or further refine a consumers request for a particular product. This may include listing details about previous purchases made or services requested by that consumer or other product or service providers that this consumer frequents.
The system may employ reasonable measures for fraud protection and abuse. The system may maintain consumer s information and act as an intermediary for all communications between consumer and product or service provider. Conversely accounts created by consumers may require some form of verification to ensure that are not charged for false information provided by a consumer or unauthorized or accidentally access of profiles by their account. Additionally the implementation of PPC to access contact the product or service buyers consumers would potentially reduce spam and increase the value of the intermediary service to both the consumer and the professional. The consumer can block or unblock a particular product or service provider so that the consumer is not bothered by product or service providers they do not want to do business with and this would also keep the product or service provider from selecting the same consumer that does not want to be bothered with unwelcome solicitations.
An example of this process may be found in the real estate profession where a consumer s profile has been added to a database which indicates that the consumer is searching for a 500 000 home with 4 bedrooms and 3 baths in Scottsdale Ariz. The consumer request was made at 9 00 am Monday morning.
A loan officer who desires to generate business in that price range performs a search on an internet search engine for consumers who are looking in the 500 000 price range and would be presented with the consumer s profile to be accessed.
A different loan officer who desires to generate business in the city of Scottsdale Ariz. performs a search on a real estate website for consumers who are searching for homes in the city of Scottsdale. That loan officer is presented with the same consumer s profile.
A third loan officer who desires to generate business in the city of Tempe Ariz. and desires consumer s in Tempe Ariz. performs a search in on his mobile devise and is offered the same consumer from Scottsdale profile because the system determined the target city of the search is relatively close to the one listed in the profile or because they are both within Maricopa county or because it was determined that consumers who shop in Scottsdale are likely to desirable consumers for loan officers in Tempe. There are many other cross over data points including time of request geographically and relevance only a few of which were discussed here in this example.
A further example of this process in the real estate profession may be illustrated where a website may exist that may ask members to signup for an account and provide Non Personal Information such as details about the type of house they are looking for e.g. bed bath count type of house location price range and personal information such as contact information e.g. name email address phone number etc. The non personal information such as the details about the type of house they are looking for may then be posted in a list with other members on the website. Real estate agents who have also signed up as professional members of the site may then select one of the members from the list and initiate contact through the site. The member may receive a request from the professional member at which time they would be able to see all the details about the person initiating the request and can then accept reject or report the contact as spam . If a request is accepted the member may then discuss further details about the property with the professional member and if they choose directly communicate any personal contact information.
A further example of this process can be illustrated with an example of a consumer searching for a designer handbag. The consumer is interested in a particular handbag from a particular manufacture and creates a profile with criteria describing the desired bag and excludes the profile from being seen or accessed by any product or service provider that doesn t offer handbags from that specific manufacturer.
A product or service provider who specializes in handbags from that particular manufacturer may contact the first consumer and sell them a handbag. After the sale the consumer may remove their profile as their needs were met. The product or service provider may have the option to keep her information in a database in order to send consumer information to the consumer as new handbags from that same manufacturer become available in an attempt to secure a future sale.
In the same scenario if a second consumer who is interested in all designer handbags if they are offered at a reasonable price may create a profile describing the desire for all designer handbags a reasonable price but she may limit the number of contact to 10 per week for example.
The same product or service provider who contacted the first consumer may also access this profile. Additionally several other product or service providers may select the profile and each may have the option to present their best deal e.g. no shipping costs reduced price and so forth . In this case the consumer may leave her profile active as she is willing to buy at least one designer bag per month so long as she perceives the price to be a bargain for example. Each of these product or service providers is now working to gain the business of the consumer and the consumer is able to get the best deal on each of her purchases without performing hours of product or service searches.
It is also possible that in this scenario product or service provider may per purchase or queue up to access this consumer if the allotted number of contacts for the product or service provider has been reached for a particular time frame.
The consumers may represent a single person an entity a group of persons or a group of entities for example. In this manner a group of persons and or a group of entities may have greater bargaining power when dealing with a particular or groups of particular product or service providers . The product or service provider may be any provider e.g. store manufacturer distributor company corporation entrepreneur agency service etc. of products and or services that may be able to provide products or services to one or more consumers . The product or service provider may be local to the geographic area of the consumer in a different geographic area to the consumer or be a national or international product or service provider for example.
The consumer product or service provider profile database may be coupled to the consumer product or service provider matching server and may store a plurality of profiles for consumers and product or service providers . The database may be accessible and or searchable by the consumer product or service provider matching server the consumers and or the product or service provider for example. The consumer product or service provider matching server may be any device having a processor and a memory that is able to communicate to and from consumers and product or service providers such as a server a computer or a processing device for example.
Processor may include at least one conventional processor or microprocessor that interprets and executes instructions. Memory may be a random access memory RAM or another type of dynamic storage device that stores information and instructions for execution by flight planning processor . Memory may also store temporary variables or other intermediate information used during execution of instructions by the processor . ROM may include a conventional ROM device or another type of static storage device that stores static information and instructions for the processor .
Input devices may include one or more conventional mechanisms that permit a user to input information to the consumer product or service provider matching server such as a keyboard a mouse a pen a voice recognition device etc. Output devices may include one or more conventional mechanisms that output information to the user including a display a printer one or more speakers or a medium such as a memory or a magnetic or optical disk and a corresponding disk drive. Communication interface may include any transceiver like mechanism that enables the consumer product or service provider matching server to communicate via a network. For example communication interface may include a modem or an Ethernet interface for communicating via a local area network LAN . Alternatively communication interface may include other mechanisms for communicating with other devices and or systems via wired wireless or optical connections. In some implementations of the consumer product or service provider matching server communication interface may not be included in the exemplary consumer product or service provider matching server when the consumer product or service provider matching process is implemented completely within the consumer product or service provider matching environment .
The consumer product or service provider matching server may perform such functions in response to the processor by executing sequences of instructions contained in a computer readable medium such as for example the memory a magnetic disk or an optical disk. Such instructions may be read into the memory from another computer readable medium such as a separate storage device or from a separate device via communication interface .
The consumer product or service provider matching server illustrated in and the related discussion are intended to provide a brief general description of a suitable computing environment in which the disclosed embodiments may be implemented. Although not required the disclosed embodiments will be described at least in part in the general context of computer executable instructions such as program modules being executed by the consumer product or service provider matching server such as a general purpose computer. Generally program modules include routine programs objects components data structures etc. that perform particular tasks or implement particular abstract data types. Moreover those skilled in the art will appreciate that other embodiments of the disclosed embodiments may be practiced in network computing environments with many types of computer system configurations including personal computers hand held devices multi processor systems microprocessor based or programmable consumer electronics network PCs minicomputers mainframe computers and the like.
Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked either by hardwired links wireless links or by a combination thereof through a communications network. In a distributed computing environment program modules may be located in both local and remote memory storage devices.
The operation of the consumer product or service provider matching unit and consumer product or service provider matching process will be described below in relation to the diagrams shown in .
At step the consumer product or service provider matching unit may receive one or more profiles from one or more consumers . The customer s profile may include contact information including name address telephone numbers and email and website address history of type of products or services needed geographic area lived in or geographic area of interest payment terms account information etc. for example . At step the consumer product or service provider matching unit may receive a consumer s subject matter request. The subject matter request may be a request for products or services an inquiry concerning a geographic area a subject matter search etc. The consumer s profile and the consumer s subject matter request may be received by the consumer product or service provider matching unit through a website for example.
At step the consumer product or service provider matching unit may match the consumer with one or more products or service providers based on at least one of the consumer s profile and subject matter request. At step the consumer product or service provider matching unit may provide the consumer s contact information to the matched one or more products or service providers . The consumer s contact information may contain a telephone number an address an email address a hyperlink to the consumer s email address information inferred by the system or additional consumer profiles for example. The consumer s profile may contain the contact information that is to be provided to the one or more product or service providers for example.
The one or more product or service provider may be charged each time it is provided access to one of the one or more consumer s profiles. In addition a particular product or service provider may be charged each time a consumer accesses contact information contained in advertising information of the one particular product or service provider . The process may then go to step and end.
During the above process the consumers and the one or more product or service providers may be connected to the consumer product or service provider matching unit through the communication interface through the use of at least one of e mail a mobile communications network telephone television electronic communication device and the internet for example.
Note that the consumer product or service provider matching unit may respond to the consumer s subject matter request and the consumer product or service provider matching unit may present the consumer with advertising information for example.
The consumer product or service provider matching unit may update a consumer s profile based on past purchases of products or services subject matter requests etc. for example. The consumer product or service provider matching unit may send at least one of coupons and discounts to the consumer based on at least one of the consumer s profile the consumer s current or past product or services purchases or the consumer s current or past subject matter requests for example.
The consumer product or service provider matching unit may derive a geographic area of interest based on the consumer s subject matter request network address and or the consumer s profile for example. The consumer product or service provider matches unit may then match the consumer with one or more products or service providers based on the consumer s profile the subject matter request and or the derived geographic area.
While the above discussion has been made with respect to an assumed system wide page ranking the system can also be supplemented with a page ranking system that can be specific to a product or service provider. For example a product or service provider may not care about whether a street address is provided and may therefore wish to not reduce a score based on those criteria. However a product or service provider may want to more strongly rank potential buyers in a particular geographic area e.g. zip code 20001 causes a 10 pt. increase in score or state HI causes a 10 pt. decrease in score .
Furthermore the system may be configured to override certain scoring factors. For example a company that specializes in credit repair may wish to provide a higher score to Poor credit than Excellent credit. Thus a product or service provider may be able to better customize search results as part of their particular product or service provider profile.
An interface such as is shown in may additionally be supplemented such that the most active product or service providers receive advertising space on the websites on which they are purchasing access to consumer profiles. The advertising space may be in the form of banners or advertisements generally that are or include hyperlinks to information regarding the product or service provider or may be specific to geographic regions in which the product or service provider is purchasing access to consumer profiles.
As discussed with respect to delayed leads above some product or service providers may be willing to pay for more timely access to profiles. Profiles can therefore be offered on a right of first refusal basis. For example a first service provider that bids the highest amount may be the first service provider to receive notice of all new or updated consumer profiles that match a specified criteria. The first service provider may then have a set period of time in which to either claim the consumer profile pass on the consumer profile thereby not exercising the right of first refusal or hold the consumer profile either at no cost or at a smaller cost than claiming the profile .
After the first product or service provider has passed on a consumer profile or after a hold has been released or expired the right to access the consumer profile information would then pass to a second product or service provider or to a second tier of product or service providers . The process could then repeat for as many levels as exist within the system. The hierarchy can be based on bids historical buying patterns profiles and or geographic information for example. The offering of the consumer profiles can be by email posting to a website text message etc. such that product or service providers are notified of when new consumer profiles are available and by when they must respond
Embodiments within the scope of the present disclosed embodiments may also include computer readable media for carrying or having computer executable instructions or data structures stored thereon. Such computer readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example and not limitation such computer readable media can comprise RAM ROM EEPROM CD ROM or other optical disk storage magnetic disk storage or other magnetic storage devices or any other medium which can be used to carry or store desired program code means in the form of computer executable instructions or data structures. When information is transferred or provided over a network or another communications connection either hardwired wireless or combination thereof to a computer the computer properly views the connection as a computer readable medium. Thus any such connection is properly termed a computer readable medium. Combinations of the above should also be included within the scope of the computer readable media.
Computer executable instructions include for example instructions and data which cause a general purpose computer special purpose computer or special purpose processing device to perform a certain function or group of functions. Computer executable instructions also include program modules that are executed by computers in stand alone or network environments. Generally program modules include routines programs objects components and data structures etc. that performs particular tasks or implement particular abstract data types. Computer executable instructions associated data structures and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
Although the above description may contain specific details they should not be construed as limiting the claims in any way. Other configurations of the described embodiments of the disclosed embodiments are part of the scope of the disclosed embodiments. For example the principles of the disclosed embodiments may be applied to each individual user where each user may individually deploy such a system. This enables each user to utilize the benefits of the disclosed embodiments even if any one of the large number of possible applications do not need the functionality described herein. In other words there may be multiple instances of the disclosed system each processing the content in various possible ways. It does not necessarily need to be one system used by all end users. Accordingly the appended claims and their legal equivalents should only define the disclosed embodiments rather than any specific examples given.
| 269.880208 | 1,716 | 0.826736 | eng_Latn | 0.999982 |
f47d6aa5ee5e5f6a7af64e076f40f47c87ad5fbd | 3,846 | md | Markdown | docs/home/howto/LOGGING.md | atsikham/epiphany | d06ee99f6b72fb04e1e1a3e0aab2d86f9775f374 | [
"Apache-2.0"
] | 1 | 2021-02-04T07:40:01.000Z | 2021-02-04T07:40:01.000Z | docs/home/howto/LOGGING.md | atsikham/epiphany | d06ee99f6b72fb04e1e1a3e0aab2d86f9775f374 | [
"Apache-2.0"
] | 1 | 2020-06-22T17:32:44.000Z | 2020-06-22T17:32:44.000Z | docs/home/howto/LOGGING.md | atsikham/epiphany | d06ee99f6b72fb04e1e1a3e0aab2d86f9775f374 | [
"Apache-2.0"
] | null | null | null | # Centralized logging setup
For centralized logging Epiphany uses [OpenDistro for Elasticsearch](https://opendistro.github.io/for-elasticsearch/).
In order to enable centralized logging, be sure that `count` property for `logging` feature is greater than 0 in your configuration manifest.
```yaml
kind: epiphany-cluster
...
specification:
...
components:
kubernetes_master:
count: 1
kubernetes_node:
count: 0
...
logging:
count: 1
...
```
## Default feature mapping for logging
```yaml
...
logging:
- logging
- kibana
- node-exporter
- filebeat
- firewall
...
```
>Optional feature (role) available for logging: **logstash**
>more details here: [link](https://github.com/epiphany-platform/epiphany/blob/develop/docs/home/howto/LOGGING.md#how-to-export-elasticsearch-data-to-csv-format)
The `logging` role replaced `elasticsearch` role. This change was done to enable Elasticsearch usage also for data storage - not only for logs as it was till 0.5.0.
Default configuration of `logging` and `opendistro_for_elasticsearch` roles is identical (./DATABASES.md#how-to-start-working-with-opendistro-for-elasticsearch). To modify configuration of centralized logging adjust and use the following defaults in your manifest:
```yaml
kind: configuration/logging
title: Logging Config
name: default
specification:
cluster_name: EpiphanyElastic
clustered: True
paths:
data: /var/lib/elasticsearch
repo: /var/lib/elasticsearch-snapshots
logs: /var/log/elasticsearch
```
## How to export Elasticsearch data to csv format
Since v0.8 Epiphany provide posibility to export data from Elasticsearch to CSV using Logstash *(logstash-oss v7.8.1*) along with *logstash-input-elasticsearch (v4.6.2)* and *logstash-output-csv (v3.0.8)* plugin.
To install Logstash in your cluster add **logstash** to feature mapping for *logging, opendistro_for_elasticsearch* or *elasticsearch* group.
Epiphany provides a basic configuration file `(logstash-export.conf.template)` as template for your data export.
This file has to be modified according to your Elasticsearch configuration and data you want to export.
`Note: Exporting data is not automated. It has to be invoked manually. Logstash daemon is disabled by default after installation.`
Run Logstash to export data:
`/usr/share/logstash/bin/logstash -f /etc/logstash/logstash-export.conf`
More details about configuration of input plugin:
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html
More details about configuration of output plugin:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-csv.html
Note: Currently input plugin doesn't officialy support skipping certificate validation for secure connection to Elasticsearch.
For non-production environment you can easly disable it by adding new line:
`ssl_options[:verify] = false` right after other ssl_options definitions in file:
`/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.6.2/lib/logstash/inputs/elasticsearch.rb`
## How to add multiline support for Filebeat logs
In order to properly handle multilines in files harvested by Filebeat you have to provide `multiline` definition in the configuration manifest. Using the following code you will be able to specify which lines are part of a single event.
By default postgresql block is provided, you can use it as example:
```yaml
postgresql_input:
multiline:
pattern: >-
'^\d{4}-\d{2}-\d{2} '
negate: true
match: after
```
Currently supported inputs: `common_input`,`postgresql_input`,`container_input`
More details about multiline options you can find in the [official documentation](https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html)
| 40.484211 | 264 | 0.75897 | eng_Latn | 0.915244 |
f47d736caa67eae013154452e1727d90c3ca94ba | 5,655 | md | Markdown | packages/@ngx-config/http-loader/README.md | NarHakobyan/ngx-config | 28610fc4b5aa0cba66030e11af67326a40beaaaa | [
"MIT"
] | null | null | null | packages/@ngx-config/http-loader/README.md | NarHakobyan/ngx-config | 28610fc4b5aa0cba66030e11af67326a40beaaaa | [
"MIT"
] | null | null | null | packages/@ngx-config/http-loader/README.md | NarHakobyan/ngx-config | 28610fc4b5aa0cba66030e11af67326a40beaaaa | [
"MIT"
] | null | null | null | # @ngx-config/http-loader [![npm version](https://badge.fury.io/js/%40ngx-config%2Fhttp-loader.svg)](https://www.npmjs.com/package/@ngx-config/http-loader) [![npm downloads](https://img.shields.io/npm/dm/%40ngx-config%2Fhttp-loader.svg)](https://www.npmjs.com/package/@ngx-config/http-loader)
Loader for [ngx-config] that provides application settings using **`http`**
[![CircleCI](https://circleci.com/gh/fulls1z3/ngx-config.svg?style=shield)](https://circleci.com/gh/fulls1z3/ngx-config)
[![coverage](https://codecov.io/github/fulls1z3/ngx-config/coverage.svg?branch=master)](https://codecov.io/gh/fulls1z3/ngx-config)
[![tested with jest](https://img.shields.io/badge/tested_with-jest-99424f.svg)](https://github.com/facebook/jest)
[![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org)
[![Angular Style Guide](https://mgechev.github.io/angular2-style-guide/images/badge.svg)](https://angular.io/styleguide)
> Please support this project by simply putting a Github star. Share this library with friends on Twitter and everywhere else you can.
#### NOTICE
> This *[6.x.x] branch* is intended to work with `Angular v6.x.x`. If you're developing on a later release of **Angular**
than `v6.x.x`, then you should probably choose the appropriate version of this library by visiting the *[master] branch*.
## Table of contents:
- [Prerequisites](#prerequisites)
- [Getting started](#getting-started)
- [Installation](#installation)
- [Examples](#examples)
- [Related packages](#related-packages)
- [Adding `@ngx-config/http-loader` to your project (SystemJS)](#adding-systemjs)
- [Settings](#settings)
- [Setting up `ConfigModule` to use `ConfigHttpLoader`](#setting-up-httploader)
- [License](#license)
## <a name="prerequisites"></a> Prerequisites
This library depends on `Angular v6.0.0`. Older versions contain outdated dependencies and might produce errors.
Also, please ensure that you are using **`Typescript v2.7.2`** or higher.
## <a name="getting-started"> Getting started
### <a name="installation"> Installation
You can install **`@ngx-config/http-loader`** using `npm`
```
npm install @ngx-config/http-loader --save
```
**Note**: You should have already installed [@ngx-config/core].
### <a name="examples"></a> Examples
- [ng-seed/universal] and [fulls1z3/example-app] are officially maintained projects, showcasing common patterns and best
practices for **`@ngx-config/http-loader`**.
### <a name="related-packages"></a> Related packages
The following packages may be used in conjunction with **`@ngx-config/http-loader`**:
- [@ngx-config/core]
- [@ngx-config/merge-loader]
### <a name="adding-systemjs"></a> Adding `@ngx-config/http-loader` to your project (SystemJS)
Add `map` for **`@ngx-config/http-loader`** in your `systemjs.config`
```javascript
'@ngx-config/http-loader': 'node_modules/@ngx-config/http-loader/bundles/http-loader.umd.min.js'
```
## <a name="settings"></a> Settings
### <a name="setting-up-httploader"></a> Setting up `ConfigModule` to use `ConfigHttpLoader`
If you provide application settings using a `JSON` file or an `API`, you can call the [forRoot] static method using the
`ConfigHttpLoader`. By default, it is configured to retrieve **application settings** from the endpoint `/config.json`
(*if not specified*).
> You can customize this behavior (*and of course other settings*) by supplying an **API endpoint** to `ConfigHttpLoader`.
- Import `ConfigModule` using the mapping `'@ngx-config/core'` and append `ConfigModule.forRoot({...})` within the imports
property of **app.module**.
- Import `ConfigHttpLoader` using the mapping `'@ngx-config/http-loader'`.
**Note**: *This assumes that app.module is the core module of the Angular application*.
#### config.json
```json
{
"system": {
"applicationName": "Mighty Mouse",
"applicationUrl": "http://localhost:8000"
},
"seo": {
"pageTitle": "Tweeting bird"
},
"i18n":{
"locale": "en"
}
}
```
#### app.module.ts
```TypeScript
...
import { HttpClient } from '@angular/common/http';
import { ConfigModule, ConfigLoader } from '@ngx-config/core';
import { ConfigHttpLoader } from '@ngx-config/http-loader';
...
export function configFactory(http: HttpClient): ConfigLoader {
return new ConfigHttpLoader(http, './config.json'); // API ENDPOINT
}
@NgModule({
declarations: [
AppComponent,
...
],
...
imports: [
ConfigModule.forRoot({
provide: ConfigLoader,
useFactory: (configFactory),
deps: [HttpClient]
}),
...
],
...
bootstrap: [AppComponent]
})
export class AppModule { }
```
`ConfigHttpLoader` has two parameters:
- **http**: `HttpClient` : Http instance
- **endpoint**: `string` : the `API endpoint`, to retrieve application settings from (*by default, `config.json`*)
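Once loaded, the settings can be read anywhere in the application through the `ConfigService` provided by [@ngx-config/core]. A minimal sketch (the component and setting names are illustrative):
```TypeScript
import { Component } from '@angular/core';
import { ConfigService } from '@ngx-config/core';

@Component({
  selector: 'app-header',
  template: '<h1>{{ title }}</h1>'
})
export class HeaderComponent {
  title: string;

  constructor(private readonly config: ConfigService) {
    // Read a nested setting by key path; the second argument is a fallback value.
    this.title = this.config.getSettings('system.applicationName', 'Default App');
  }
}
```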
> :+1: Well! **`@ngx-config/http-loader`** will now provide **application settings** to [@ngx-config/core] using `http`.
## <a name="license"></a> License
The MIT License (MIT)
Copyright (c) 2018 [Burak Tasci]
[master]: https://github.com/fulls1z3/ngx-config/core/tree/master
[6.x.x]: https://github.com/fulls1z3/ngx-config/core/tree/6.x.x
[ngx-config]: https://github.com/fulls1z3/ngx-config
[ng-seed/universal]: https://github.com/ng-seed/universal
[fulls1z3/example-app]: https://github.com/fulls1z3/example-app
[@ngx-config/core]: https://github.com/fulls1z3/ngx-config/tree/master/packages/@ngx-config/core
[@ngx-config/merge-loader]: https://github.com/fulls1z3/ngx-config/tree/master/packages/@ngx-config/merge-loader
[forRoot]: https://angular.io/docs/ts/latest/guide/ngmodule.html#!#core-for-root
[Burak Tasci]: https://github.com/fulls1z3
---
title: "ODBC Scalar Functions (Transact-SQL) | Microsoft Docs"
ms.custom: ""
ms.date: "03/15/2017"
ms.prod: "sql-non-specified"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "database-engine"
ms.tgt_pltfrm: ""
ms.topic: "language-reference"
dev_langs:
- "TSQL"
helpviewer_keywords:
- "CURDATE ODBC function"
- "DAYOFMONTH ODBC function"
- "WEEK ODBC function"
- "functions, ODBC CONCAT"
- "SECOND ODBC function"
- "functions, ODBC CURRENT_TIME"
- "functions, ODBC HOUR"
- "functions, ODBC QUARTER"
- "TRUNCATE ODBC function"
- "functions, ODBC SECOND"
- "DAYOFWEEK ODBC function"
- "CURRENT_DATE ODBC function"
- "DAYNAME ODBC function"
- "CURTIME ODBC function"
- "functions, ODBC CURRENT_DATE"
- "MINUTE ODBC function"
- "functions, ODBC BIT_LENGTH"
- "functions, ODBC OCTET_LENGTH"
- "CONCAT ODBC function"
- "MONTHNAME ODBC function"
- "functions, ODBC DAYOFYEAR"
- "OCTET_LENGTH ODBC function"
- "functions [SQL Server], ODBC scalar functions"
- "BIT_LENGTH ODBC function"
- "functions, ODBC DAYOFMONTH"
- "CURRENT_TIME ODBC function"
- "functions, ODBC CURDATE"
- "functions, ODBC CURTIME"
- "functions, ODBC TRUNCATE"
- "functions, ODBC MONTHNAME"
- "functions, ODBC DAYNAME"
- "ODBC scalar functions"
- "functions, ODBC MINUTE"
- "functions, ODBC DAYOFWEEK"
- "QUARTER ODBC function"
- "DAYOFYEAR ODBC function"
- "functions, ODBC WEEK"
- "HOUR ODBC function"
ms.assetid: a0df1ac2-6699-4ac0-8f79-f362f23496f1
caps.latest.revision: 17
author: "BYHAM"
ms.author: "rickbyh"
manager: "jhubbard"
---
# ODBC Scalar Functions (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-all_md](../../includes/tsql-appliesto-ss2008-all-md.md)]
You can use [ODBC Scalar Functions](http://go.microsoft.com/fwlink/?LinkID=88579) in [!INCLUDE[tsql](../../includes/tsql-md.md)] statements. These statements are interpreted by [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. They can be used in stored procedures and user-defined functions. These include string, numeric, time, date, interval, and system functions.
## Usage
`SELECT {fn <function_name> [ (<argument>,...n) ] }`
## Functions
The following tables list ODBC scalar functions that are not duplicated in [!INCLUDE[tsql](../../includes/tsql-md.md)].
### String Functions
|Function|Description|
|--------------|-----------------|
|BIT_LENGTH( string_exp ) (ODBC 3.0)|Returns the length in bits of the string expression.<br /><br /> Does not operate only on string data types; it will not implicitly convert string_exp to a string, but will instead return the (internal) size of whatever data type it is given.|
|CONCAT( string_exp1,string_exp2) (ODBC 1.0)|Returns a character string that is the result of concatenating string_exp2 to string_exp1. The resulting string is DBMS-dependent. For example, if the column represented by string_exp1 contained a NULL value, DB2 would return NULL but [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] would return the non-NULL string.|
|OCTET_LENGTH( string_exp ) (ODBC 3.0)|Returns the length in bytes of the string expression. The result is the smallest integer not less than the number of bits divided by 8.<br /><br /> Does not operate only on string data types; it will not implicitly convert string_exp to a string, but will instead return the (internal) size of whatever data type it is given.|
### Numeric Function
|Function|Description|
|--------------|-----------------|
|TRUNCATE( numeric_exp, integer_exp) (ODBC 2.0)|Returns numeric_exp truncated to integer_exp positions right of the decimal point. If integer_exp is negative, numeric_exp is truncated to \|integer_exp\| positions to the left of the decimal point.|
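For instance, a negative integer_exp truncates positions to the left of the decimal point (illustrative values, following the pattern of example C below):
```
SELECT {fn TRUNCATE( 1234.5678, 2)};
-- Returns 1234.5600
SELECT {fn TRUNCATE( 1234.5678, -2)};
-- Returns 1200.0000
```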
### Time, Date, and Interval Functions
|Function|Description|
|--------------|-----------------|
|CURRENT_DATE( ) (ODBC 3.0)|Returns the current date.|
|CURDATE( ) (ODBC 3.0)|Returns the current date.|
|CURRENT_TIME`[( time-precision )]` (ODBC 3.0)|Returns the current local time. The time-precision argument determines the seconds precision of the returned value|
|CURTIME() (ODBC 3.0)|Returns the current local time.|
|DAYNAME( date_exp ) (ODBC 2.0)|Returns a character string that contains the data source–specific name of the day (for example, Sunday through Saturday or Sun. through Sat. for a data source that uses English, or Sonntag through Samstag for a data source that uses German) for the day part of date_exp.|
|DAYOFMONTH( date_exp ) (ODBC 1.0)|Returns the day of the month based on the month field in date_exp as an integer value in the range of 1–31.|
|DAYOFWEEK( date_exp ) (ODBC 1.0)|Returns the day of the week based on the week field in date_exp as an integer value in the range of 1–7, where 1 represents Sunday.|
|HOUR( time_exp ) (ODBC 1.0)|Returns the hour based on the hour field in time_exp as an integer value in the range of 0–23.|
|MINUTE( time_exp ) (ODBC 1.0)|Returns the minute based on the minute field in time_exp as an integer value in the range of 0–59.|
|SECOND( time_exp ) (ODBC 1.0)|Returns the second based on the second field in time_exp as an integer value in the range of 0–59.|
|MONTHNAME( date_exp ) (ODBC 2.0)|Returns a character string that contains the data source–specific name of the month (for example, January through December or Jan. through Dec. for a data source that uses English, or Januar through Dezember for a data source that uses German) for the month part of date_exp.|
|QUARTER( date_exp ) (ODBC 1.0)|Returns the quarter in date_exp as an integer value in the range of 1–4, where 1 represents January 1 through March 31.|
|WEEK( date_exp ) (ODBC 1.0)|Returns the week of the year based on the week field in date_exp as an integer value in the range of 1–53.|
## Examples
### A. Using an ODBC function in a stored procedure
The following example uses an ODBC function in a stored procedure:
```
CREATE PROCEDURE dbo.ODBCprocedure
(
@string_exp nvarchar(4000)
)
AS
SELECT {fn OCTET_LENGTH( @string_exp )};
```
### B. Using an ODBC Function in a user-defined function
The following example uses an ODBC function in a user-defined function:
```
CREATE FUNCTION dbo.ODBCudf
(
@string_exp nvarchar(4000)
)
RETURNS int
AS
BEGIN
DECLARE @len int
SET @len = (SELECT {fn OCTET_LENGTH( @string_exp )})
RETURN(@len)
END ;
SELECT dbo.ODBCudf('Returns the length.');
--Returns 38
```
### C. Using ODBC functions in SELECT statements
The following SELECT statements use ODBC functions:
```
DECLARE @string_exp nvarchar(4000) = 'Returns the length.';
SELECT {fn BIT_LENGTH( @string_exp )};
-- Returns 304
SELECT {fn OCTET_LENGTH( @string_exp )};
-- Returns 38
SELECT {fn CONCAT( 'CONCAT ','returns a character string')};
-- Returns CONCAT returns a character string
SELECT {fn TRUNCATE( 100.123456, 4)};
-- Returns 100.123400
SELECT {fn CURRENT_DATE( )};
-- Returns 2007-04-20
SELECT {fn CURRENT_TIME(6)};
-- Returns 10:27:11.973000
DECLARE @date_exp nvarchar(30) = '2007-04-21 01:01:01.1234567';
SELECT {fn DAYNAME( @date_exp )};
-- Returns Saturday
SELECT {fn DAYOFMONTH( @date_exp )};
-- Returns 21
SELECT {fn DAYOFWEEK( @date_exp )};
-- Returns 7
SELECT {fn HOUR( @date_exp)};
-- Returns 1
SELECT {fn MINUTE( @date_exp )};
-- Returns 1
SELECT {fn SECOND( @date_exp )};
-- Returns 1
SELECT {fn MONTHNAME( @date_exp )};
-- Returns April
SELECT {fn QUARTER( @date_exp )};
-- Returns 2
SELECT {fn WEEK( @date_exp )};
-- Returns 16
```
## Examples: [!INCLUDE[ssSDWfull](../../includes/sssdwfull-md.md)] and [!INCLUDE[ssPDW](../../includes/sspdw-md.md)]
### D. Using an ODBC function in a stored procedure
The following example uses an ODBC function in a stored procedure:
```
CREATE PROCEDURE dbo.ODBCprocedure
(
@string_exp nvarchar(4000)
)
AS
SELECT {fn BIT_LENGTH( @string_exp )};
```
### E. Using an ODBC Function in a user-defined function
The following example uses an ODBC function in a user-defined function:
```
CREATE FUNCTION dbo.ODBCudf
(
@string_exp nvarchar(4000)
)
RETURNS int
AS
BEGIN
DECLARE @len int
SET @len = (SELECT {fn BIT_LENGTH( @string_exp )})
RETURN(@len)
END ;
SELECT dbo.ODBCudf('Returns the length in bits.');
--Returns 432
```
### F. Using ODBC functions in SELECT statements
The following SELECT statements use ODBC functions:
```
DECLARE @string_exp nvarchar(4000) = 'Returns the length.';
SELECT {fn BIT_LENGTH( @string_exp )};
-- Returns 304
SELECT {fn CONCAT( 'CONCAT ','returns a character string')};
-- Returns CONCAT returns a character string
SELECT {fn CURRENT_DATE( )};
-- Returns today's date
SELECT {fn CURRENT_TIME(6)};
-- Returns the time
DECLARE @date_exp nvarchar(30) = '2007-04-21 01:01:01.1234567';
SELECT {fn DAYNAME( @date_exp )};
-- Returns Saturday
SELECT {fn DAYOFMONTH( @date_exp )};
-- Returns 21
SELECT {fn DAYOFWEEK( @date_exp )};
-- Returns 7
SELECT {fn HOUR( @date_exp)};
-- Returns 1
SELECT {fn MINUTE( @date_exp )};
-- Returns 1
SELECT {fn SECOND( @date_exp )};
-- Returns 1
SELECT {fn MONTHNAME( @date_exp )};
-- Returns April
SELECT {fn QUARTER( @date_exp )};
-- Returns 2
SELECT {fn WEEK( @date_exp )};
-- Returns 16
```
## See Also
[Built-in Functions (Transact-SQL)](~/t-sql/functions/functions.md)
# Installing Wings
Wings is the next generation server control plane from Pterodactyl. It has been rebuilt from the
ground up using Go and lessons learned from our first Nodejs Daemon.
::: warning
You should only install Wings if you are running **Pterodactyl 1.x**. Do not install this software
for previous versions of Pterodactyl.
:::
## Supported Systems
The following is a list of supported operating systems. Please be aware that this is not an exhaustive list,
there is a high probability that you can run the software on other Linux distributions without much effort.
You are responsible for determining which packages may be necessary on those systems. There is also a very
high probability that new releases of the supported OSes below will work just fine, you are not restricted to
only the versions listed below.
| Operating System | Version | Supported | Notes |
| ---------------- | ------- | :----------------: | ----------------------------------------------------------- |
| **Ubuntu** | 18.04 | :white_check_mark: | Documentation written assuming Ubuntu 18.04 as the base OS. |
| | 20.04 | :white_check_mark: | |
| **CentOS** | 7 | :white_check_mark: | |
| | 8 | :white_check_mark: | Note that CentOS 8 is EOL. Use Rocky or Alma Linux. |
| **Debian** | 9 | :white_check_mark: | |
| | 10 | :white_check_mark: | |
| | 11 | :white_check_mark: | |
| **Windows** | All | :x: | This software will not run in Windows environments. |
## System Requirements
To run Wings, you will need a Linux system capable of running Docker containers. Most VPS and almost all
dedicated servers should be capable of running Docker, but there are edge cases.
When your provider uses `Virtuozzo`, `OpenVZ` (or `OVZ`), or `LXC` virtualization, you will most likely be unable to
run Wings. Some providers have made the necessary changes for nested virtualization to support Docker. Ask your provider's support team to make sure. KVM is guaranteed to work.
The easiest way to check is to type `systemd-detect-virt`.
If the result doesn't contain `OpenVZ` or `LXC`, it should be fine. The result of `none` will appear when running dedicated hardware without any virtualization.
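For example, on a KVM guest the check looks like this:
```bash
dane@pterodactyl:~$ systemd-detect-virt
kvm
```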
Should that not work for some reason, or you're still unsure, you can also run the command below.
```bash
dane@pterodactyl:~$ sudo dmidecode -s system-manufacturer
VMware, Inc.
```
## Dependencies
- curl
- Docker
### Installing Docker
For a quick install of Docker CE, you can execute the command below:
```bash
curl -sSL https://get.docker.com/ | CHANNEL=stable bash
```
If you would rather do a manual installation, please reference the official Docker documentation for how to install Docker CE on your server. Some quick links
are listed below for commonly supported systems.
- [Ubuntu](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce)
- [CentOS](https://docs.docker.com/install/linux/docker-ce/centos/#install-docker-ce)
- [Debian](https://docs.docker.com/install/linux/docker-ce/debian/#install-docker-ce)
::: warning Check your Kernel
Please be aware that some hosts install a modified kernel that does not support important docker features. Please
check your kernel by running `uname -r`. If your kernel ends in `-xxxx-grs-ipv6-64` or `-xxxx-mod-std-ipv6-64` you're
probably using a non-supported kernel. Check our [Kernel Modifications](../../../daemon/0.6/kernel_modifications.md) guide for details.
:::
#### Start Docker on Boot
If you are on an operating system with systemd (Ubuntu 16+, Debian 8+, CentOS 7+) run the command below to have Docker start when you boot your machine.
```bash
systemctl enable --now docker
```
#### Enabling Swap
On most systems, Docker will be unable to set up swap space by default. You can confirm this by running `docker info` and looking for the output of `WARNING: No swap limit support` near the bottom.
Enabling swap is entirely optional, but we recommend doing it if you will be hosting for others, and to prevent OOM errors.
To enable swap, open `/etc/default/grub` as a root user and find the line starting with `GRUB_CMDLINE_LINUX_DEFAULT`. Make
sure the line includes `swapaccount=1` somewhere inside the double-quotes.
After that, run `sudo update-grub` followed by `sudo reboot` to restart the server and have swap enabled.
Below is an example of what the line should look like, _do not copy this line verbatim. It often has additional OS-specific parameters._
```text
GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1"
```
::: tip GRUB Configuration
Some Linux distros may ignore `GRUB_CMDLINE_LINUX_DEFAULT`. Therefore you might have to use `GRUB_CMDLINE_LINUX` instead should the default one not work for you.
:::
## Installing Wings
The first step for installing Wings is to ensure we have the required directory structure set up. To do so,
run the commands below, which will create the base directory and download the wings executable.
```bash
mkdir -p /etc/pterodactyl
curl -L -o /usr/local/bin/wings "https://github.com/pterodactyl/wings/releases/latest/download/wings_linux_$([[ "$(uname -m)" == "x86_64" ]] && echo "amd64" || echo "arm64")"
chmod u+x /usr/local/bin/wings
```
::: warning OVH/SYS Servers
If you are using a server provided by OVH or SoYouStart please be aware that your main drive space is probably allocated to
`/home`, and not `/` by default. Please consider using `/home/daemon-data` for server data. This can be easily
set when creating the node.
:::
## Configure
Once you have installed Wings and the required components, the next step is to create a node on your installed Panel. Go to your Panel administrative view, select Nodes from the sidebar, and on the right side click the Create New button.
After you have created a node, click on it and there will be a tab called Configuration. Copy the code block content, paste it into a new file called `config.yml` in `/etc/pterodactyl` and save it.
Alternatively, you can click on the Generate Token button, copy the bash command and paste it into your terminal.
![example image of wings configuration](./../../.vuepress/public/wings_configuration_example.png)
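For reference, a generated configuration looks roughly like the sketch below. Every value here is a placeholder; always use the exact block produced by your Panel:
```yaml
debug: false
uuid: 0e2b89a8-0000-0000-0000-000000000000  # node UUID assigned by the Panel
token_id: iAmATokenID
token: iAmAVeryLongSecretToken
api:
  host: 0.0.0.0
  port: 8080
  ssl:
    enabled: true
    cert: /etc/letsencrypt/live/node.example.com/fullchain.pem
    key: /etc/letsencrypt/live/node.example.com/privkey.pem
system:
  data: /var/lib/pterodactyl/volumes
  sftp:
    bind_port: 2022
remote: https://panel.example.com
```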
::: warning
When your Panel is using SSL, Wings must also have an SSL certificate created for its FQDN. See the [Creating SSL Certificates](/tutorials/creating_ssl_certificates.html) documentation page for how to create these certificates before continuing.
:::
### Starting Wings
To start Wings, simply run the command below, which will start it in debug mode. Once you have confirmed that it is running without errors, use `CTRL+C` to terminate the process and daemonize it by following the instructions below. Depending on your server's internet connection, pulling and starting Wings for the first time may take a few minutes.
```bash
sudo wings --debug
```
Omit the `--debug` flag once you have confirmed that Wings runs correctly.
### Daemonizing (using systemd)
Running Wings in the background is a simple task; just make sure that it runs without errors before doing
this. Place the contents below in a file called `wings.service` in the `/etc/systemd/system` directory.
```text
[Unit]
Description=Pterodactyl Wings Daemon
After=docker.service
Requires=docker.service
PartOf=docker.service
[Service]
User=root
WorkingDirectory=/etc/pterodactyl
LimitNOFILE=4096
PIDFile=/var/run/wings/daemon.pid
ExecStart=/usr/local/bin/wings
Restart=on-failure
StartLimitInterval=180
StartLimitBurst=30
RestartSec=5s
[Install]
WantedBy=multi-user.target
```
Then, run the commands below to reload systemd and start Wings.
```bash
systemctl enable --now wings
```
### Node Allocations
An allocation is a combination of an IP address and port that you can assign to a server. Each created server must have at least one allocation. The allocation IP is the address of your network interface; in some cases, such as when behind NAT, it is the internal IP. To create new allocations go to Nodes > your node > Allocation.
![example image of node allocations](../../.vuepress/public/node_allocations.png)
Type `hostname -I | awk '{print $1}'` to find the IP to be used for the allocation. Alternatively, you can type `ip addr | grep "inet "` to see all your available interfaces and IP addresses. Do not use 127.0.0.1 for allocations.
### v2.5.3 - 2019-01-24
#### Added
* Added support for ed25519 SSH identity files.
#### Fixed
* Fixed possible crash after starred commits have been added and removed.
----
### v2.5.2 - 2019-01-06
#### Fixed
* (Win) Fixed status diff performance regression on indexes built with older versions of GitAhead.
* (Win) Fixed installer shortcuts to be relative to the actual install location instead of hard coded to the default location.
----
### v2.5.1 - 2018-12-22
#### Fixed
* (Win) Restored license agreement to the installer.
* (Win) Avoid restart after the VS redistributable is installed.
* (Mac) Fixed failure to launch on macOS 10.12 and 10.13.
#### Changed
* Truncate tool bar badge numbers at 999.
----
### v2.5.0 - 2018-12-13
#### Changed
* GitAhead is now open source and free for everybody! See the new gitahead.com for details.
----
### v2.4.7 - 2018-11-20
#### Added
* (Mac) Added basic support for touch bar.
* (Win) Added support for symlinks when core.symlinks=true.
#### Fixed
* Fixed inverted plus/minus color in default theme.
* Disallow multiple remote accounts of the same kind with the same username.
----
### v2.4.6 - 2018-09-26
#### Fixed
* Fixed crash on initializing new repository.
* Fixed port conflict with some web servers.
----
### v2.4.5 - 2018-09-24
#### Fixed
* Fixed performance issues on some large repositories.
* (Mac) Fixed compatibility issues with macOS Mojave.
#### Changed
* Changed default merge operation to commit.
----
### v2.4.4 - 2018-09-11
#### Fixed
* Improved performance on some very large repositories.
* (Win) Fixed regression in looking up credentials in some cases.
----
### v2.4.3 - 2018-08-31
#### Fixed
* Fixed possible crash on fetch.
* Fixed additional issue in extracting repo name from URL.
* Fixed performance issue when browsing tree view.
* Fixed describe performance issues on very large repositories.
* (Linux) Fixed failure to lookup username from credential helper.
#### Changed
* Changed default font sizes slightly.
* Changed external editor setting to set "gui.editor" instead of "core.editor".
* Changed credential helper implementation to be more consistent with git.
* (Linux) Execute application in context of user's default shell when launched from desktop.
----
### v2.4.2 - 2018-08-23
#### Added
* Added context menu item to rename branches.
* Added option to set external editor (core.editor).
* Respect GIT_EDITOR, core.editor, VISUAL, and EDITOR (in that order) when editing in an external editor.
#### Fixed
* Fixed failure to rename branches.
* Fixed clone dialog repo name detection to use everything up to .git.
#### Changed
* Moved external tools options into their own settings page.
----
### v2.4.1 - 2018-08-17
#### Added
* Added option to disable credential storage.
#### Fixed
* Fixed remote host connection issues.
----
### v2.4.0 - 2018-07-20
#### Added
* Added empty main window interface for opening repositories.
* Added options to merge dialog to require or disable fast-forward.
#### Fixed
* Fixed possible file corruption when discarding or staging individual hunks.
* Fixed intermittent invalid status diff and possible crash.
#### Changed
* Show repository sidebar by default.
* Allow last tab to be closed.
* Start application with an empty main window instead of repository chooser.
* Remember open tabs when exiting the app by closing the last window.
----
### v2.3.7 - 2018-07-10
#### Fixed
* Fixed possible hang on start.
* Fixed regression in adding recent repositories with the same name as an existing repository.
----
### v2.3.6 - 2018-07-05
#### Added
* Added option to stop prompting to stage directories and large files.
* (Beta) Added repository sidebar.
#### Fixed
* Improved performance of staging a large number of files.
* Added missing [remote rejected] push errors to the log.
* (Win) Copy file paths with native separators.
#### Changed
* Increased threshold for prompt to stage large files by a factor of 10.
----
### v2.3.5 - 2018-06-21
#### Fixed
* Fixed ignore issues on some patterns.
* Fixed performance issue when opening settings dialogs.
* Fixed possible crash on changing window settings after a window has been closed.
* Improve commit list performance when sorting by date and the graph is disabled.
#### Changed
* Made the diff view drop shadow more subtle.
----
### v2.3.4 - 2018-06-14
#### Fixed
* Fixed selection issues and possible crash on reference view filtering.
#### Changed
* Don't show the license error dialog on application start when the error is the result of a network error.
* Avoid connecting to remote accounts every time the repository chooser is opened.
----
### v2.3.3 - 2018-05-27
#### Added
* Added plugin API to get hunk lexer.
#### Fixed
* Fixed interactivity during rebase.
* Fixed line length plugin issue on some empty lines.
* (Mac) Fixed style regression on macOS native theme.
#### Changed
* Disabled most plugin checks on unstyled hunks.
* Automatically add /api/v3 to GitHub Enterprise URLs.
* Removed bundle git-lfs. LFS integration requires git-lfs to be installed separately.
----
### v2.3.2 - 2018-05-14
#### Added
* Added command line option to populate the pathspec filter.
* Added notes about authentication with GitHub and GitLab by personal access token.
#### Fixed
* Fixed inverted tab plugin options.
* Disable pull on bare repositories.
* Fixed spurious prompts to decrypt SSH identity file after invalid public key connection error.
* (Win) Fixed treatment of junction points to match command line git.
----
### v2.3.1 - 2018-05-03
#### Added
* Added context menu item to set or unset the executable mode of files after they have been staged. This is equivalent to the 'git update-index --chmod=(+|-)x' command.
#### Fixed
* Fixed file discard button to remove directories.
* Fixed trailing whitespace plugin error for files that have multiple line ending characters (e.g. CRLF).
* Fixed file context menu performance issue.
* (Linux) Fixed failure to detect libsecret at runtime on Ubuntu 18.04.
----
### v2.3.0 - 2018-05-01
#### Added
* Added plugin API to flag errors in diffs.
* Added several built-in plugins.
* API documentation can be found at Help -> Plugin Documentation...
* Prompt before staging large binary files.
* Added option to create and clone repositories bare.
* Added option to deinitialize LFS for the repository.
#### Fixed
* Fixed failure to prompt for HTTPS credentials when the remote URL scheme is 'http'.
----
### v2.2.2 - 2018-04-18
#### Added
* Added menu item to remove all LFS locks.
#### Fixed
* Fixed failure to execute LFS filters when git-lfs isn't on the PATH.
* (Win) Fixed indexer crash regression.
#### Changed
* Store all commit list options as repository-specific settings instead of global settings.
* Changed the behavior of the reference interface to keep the HEAD as the current reference when 'show all branches' is enabled.
----
### v2.2.1 - 2018-04-10
#### Added
* Added size label for binary files in the diff view.
* Added before and after version of images in the diff view.
* Added commit list context menu items to check out local branches that point to the selected commit.
#### Fixed
* Improved performance of filtering by pathspec when the search index is enabled.
* Fixed failure to execute LFS pre-push hook when git-lfs isn't on the PATH.
* Fixed performance regression when opening the config dialog.
* (Win) Fixed badge color of unselected items in the dark theme.
----
### v2.2.0 - 2018-03-27
#### Added
* Added improved support for LFS.
* Show icon of non-image binary files in diff view.
* (Win) Enable smooth scaling by automatically detecting HiDPI displays.
#### Fixed
* Fixed failure to fully stage symlinks.
* Fixed failure to refresh correctly after amending the first commit.
----
### v2.1.3 - 2018-03-06
#### Fixed
* Fixed failure to clone when the URL contains leading or trailing whitespace.
* Fixed failure to identify the correct SSH key file when using an ssh: style URL.
* Fixed unexpected quit after adding a valid license when the evaluation is expired.
* (Win) Fixed intermittent indexer crash.
----
### v2.1.2 - 2018-03-01
#### Added
* Added privacy statement.
#### Fixed
* Fixed crash on conflicted diffs after all conflicts have been resolved.
* Fixed possible indexer crash.
* (Win) Fixed failure to connect to GitHub through HTTPS on Windows 7.
----
### v2.1.1 - 2018-02-27
#### Added
* Allow 'Remove Untracked Files' context menu actions to remove untracked directories.
* (Win) Generate crash dump file when the indexer process crashes.
#### Changed
* Show commit list times in the local time zone. The detail view still shows time in the author's time zone offset from UTC.
----
### v2.1.0 - 2018-02-15
#### Added
* Allow tags to be pushed individually.
* Added context menu option to clean untracked files from the working directory.
* Automatically scale down images that are wider than the diff area.
* Added options to collapse added and deleted files. All files now start expanded by default.
* (Linux) Added libsecret credential storage for distros that have it.
#### Fixed
* Fixed regression in visiting range links from the log.
* Print error to log when the pre-push hook fails.
* Print error to log when stage fails because of a filter error.
* Fixed staged state of files that can't be staged because of a filter error.
* Fixed failure to execute some filters (including the LFS ones) that require the working directory be set to the working directory of the repository.
* Fixed interface issues in some themes.
* Fixed failure to update graph during refresh when the refresh is triggered by changes to the working directory.
#### Changed
* Changed selection in the commit list to allow only two commits to be selected with Ctrl+click (or Cmd+click on Mac) instead of a contiguous range with Shift+click. Selecting a contiguous range doesn't make sense when it includes commits on different branches.
----
### v2.0.6 - 2018-01-23
#### Fixed
* Disable editing and hide caret on untracked files in the diff view.
* Fixed failure to cancel remote transfer in some cases.
* Fixed failure to report error after invalid offline registration attempt.
----
### v2.0.5 - 2018-01-17
#### Added
* Added theme menu item to edit the current theme.
#### Fixed
* Fixed crash on some diffs.
* (Win) Fixed issues with localized repository and config path names.
#### Changed
* Changed HEAD reference color in dark theme.
----
### v2.0.4 - 2018-01-10
#### Added
* Allow more SSL errors to be ignored.
* Remember ignored SSL errors and don't continue to prompt about them.
* (Mac) Added 'Open in GitAhead' services context menu action.
#### Fixed
* Fixed crash on conflicted files that only exist on one side of the merge.
* (Win) Install updates in the previous install location.
----
### v2.0.3 - 2018-01-05
#### Added
* Remember 'no commit' setting in the merge dialog.
* Added more specific error message on failure to establish SSL connection.
* Allow certain SSL errors to be ignored.
* (Win) Added explorer context menu shortcuts to installer.
#### Fixed
* Fixed integration with GitLab. A personal access token is now required for authentication.
----
### v2.0.2 - 2017-12-22
#### Fixed
* Fixed issues with offline licenses.
----
### v2.0.1 - 2017-12-20
#### Added
* Added menu indicator to 'Pull' button. The right third of the button now pops up the menu.
* Added option to disable filtering of non-existant paths from the recent repository list.
* Respect the 'gui.encoding' setting when loading and saving diff and editor text.
* Added option to set the character encoding in the 'Diff' configuration panels.
* (Win) Allow most external tools to run without shell evaluation.
#### Fixed
* Fixed deselection by Ctrl+click in the file list.
* (Win) Fixed HEAD reference selection color in commit list.
#### Changed
* Changed several theme keys to be more consistent. Added another sample theme.
----
### v2.0.0 - 2017-12-14
#### <span style='color: red'>GitAhead 2.0 requires a new license!</span>
* All current commercial users can upgrade their license at no cost. Check your email for details.
* Non-commercial users can sign up for a free non-commercial account through a link in the app.
* The two week evaluation period will reset for all users, so a new license isn't required immediately.
* If you have any questions please contact us at [[email protected]](mailto:[email protected]).
#### Added
* Added support for custom themes.
#### Fixed
* Fixed failure to cleanup after conflict resolution in some cases.
----
### v1.4.14 - 2017-12-11
#### Fixed
* Fixed regression in selecting the status (uncommitted changes) row after the working directory changes.
* Fixed failure to persist conflict resolution selections after refresh.
----
### v1.4.13 - 2017-12-06
#### Fixed
* Fixed possible crash on newly initialized repos.
* Fixed crash on submodule update after fetch when new submodules are added.
#### Changed
* Disallow automatic update to new major versions.
----
### v1.4.12 - 2017-11-30
#### Added
* Added tag icon next to tag reference badges.
* Added option to show the full repository path in the title bar.
#### Fixed
* Remember the size and location of the start dialog.
* Reset selection more consistently to the HEAD branch when it changes.
----
### v1.4.11 - 2017-11-21
#### Added
* Added buttons to conflict hunks to choose 'ours' or 'theirs' changes.
* Disallow staging conflicted files that still contain conflict markers.
* Log all merge abort actions.
* Added log text to explain conflict resolution process.
#### Fixed
* Fixed failure to show stashes or possible crash in some cases.
* Fixed regression in updating commit list reference badges when references are added and deleted.
----
### v1.4.10 - 2017-11-10
#### Added
* Changed background color of conflict badges.
* Added status text to indicate the number of unresolved conflicts.
* Added conflict hunk background colors on ours and theirs lines.
#### Fixed
* Fixed issue with spaces in .gitignore files.
* Fixed a small memory leak.
* Fixed conflict hunk check state.
* Fixed conflict hunk line number issue.
* (Mac) Fixed issue that prevented the disk image from mounting on systems older than macOS 10.12.
----
### v1.4.9 - 2017-11-06
#### Added
* Added context menu action to remove multiple untracked files at once.
* Show detached HEAD label at a specific tag if any tags point directly to the commit.
#### Fixed
* Fixed cancel during resolve.
* Fixed regression in showing the detached HEAD label in the commit list.
* Fixed failure to immediately index some commits if the indexer is running while references are updated.
* Fixed failure to get more than the first page of GitHub repositories.
----
### v1.4.8 - 2017-10-30
#### Added
* Added prompt before staging directories.
* Substitute emoji for emoji short codes in commit messages.
* Added missing extension mapping for CoffeeScript.
#### Fixed
* Sort conflicts to the top unconditionally.
* Automatically select the status row when the HEAD changes.
* Fixed commit list scroll performance on repositories with many refs.
* (Win) Search for git bash in default install location.
* (Win) Added warning dialog when an external tool fails because bash can't be found.
#### Changed
* Changed the behavior of double-click in the file list to open the external diff/merge tool if one is enabled.
----
### v1.4.7 - 2017-10-20
#### Added
* Added context menu actions to apply, pop, and drop stashes.
#### Fixed
* Fixed issues related to revert and cherry-pick that result in a conflict.
* Fixed merge abort menu item to update more consistently.
----
### v1.4.6 - 2017-10-17
#### Added
* Added advanced option to specify a custom URL for hosting providers.
#### Fixed
* Fixed failure to execute pre-push hook (including LFS) in some cases.
* Fixed error messages on console.
----
### v1.4.5 - 2017-10-10
#### Added
* Added option to merge without committing.
* Added context menu actions to stage/unstage multiple files at once.
#### Fixed
* Fixed usability issues in the custom external tools editor interface.
* Don't yield focus to the commit message editor when staging files with the keyboard.
* Fixed state of stash toolbar button and menu item when the working directory is clean except for untracked files.
* Fixed failure to cherry-pick and revert when the working directory is dirty.
* Fixed hang on some diffs.
* Fixed garbage remote transfer rate when the elapsed time is too small to measure.
* (Mac) Fixed several issues on macOS 10.13.
----
### v1.4.4 - 2017-09-22
#### Added
* Style editor content after saving a new file.
#### Fixed
* Fixed style of tag reference badges that point to the HEAD.
* Fixed regression in showing images in the diff area.
* Fixed regression in indexing by lexical class.
* (Win) Fixed regression in staging "executable" files.
----
### v1.4.3 - 2017-09-15
#### Added
* Added reference badge for the detached HEAD state.
* Added commit list context menu item to rebase a specific commit.
* Added remote branch context menu item to create a new local branch that tracks the remote branch.
#### Fixed
* Fixed bugs in staging file mode changes.
* Fixed failure to look up the correct host name for SSH URLs that use an alias from the SSH config file.
#### Changed
* Start indexer process with lower priority.
----
### v1.4.2 - 2017-09-11
#### Added
* Draw HEAD reference badges darker and bold.
* Added dotted graph edge from the status item to the current HEAD commit.
* Added an option to show the status item even when the working directory is clean.
* Added commit list context menu item to merge a specific commit.
----
### v1.4.1 - 2017-09-05
#### Added
* Added merge and rebase context menu actions to reference view references.
#### Fixed
* Fixed SSH host name lookup to use the first matching host instead of the last.
* Fixed regression in loading the initial diff on repo open when the working directory scan is very fast.
* (Win) Fixed failure to look up stored passwords from the credential manager for HTTPS URLs that don't include the username.
----
### v1.4.0 - 2017-08-30
#### Added
* Added context menu action to delete tags.
* Added option to show all references in the commit list.
* Added option to change commit list sort order.
* Added option to turn off the commit list graph.
* Expand the reference view by clicking in its label area.
* Jump to the selected reference by clicking on its name in the reference view.
#### Fixed
* Fixed dark theme appearance of the advanced search button.
* Don't reset the caret position/selection when the editor window loses focus.
#### Changed
* Changing the selected reference back to the current (HEAD) branch no longer causes the working directory to be rescanned.
----
### v1.3.13 - 2017-08-18
#### Fixed
* Fixed file list scroll range bug.
* Fixed unresponsive interface on pull followed by submodule update.
* Fixed invalid commit list context menu actions when multiple commits are selected.
* Fixed diff view update bug when selecting a different branch in the reference view.
* Disable diff view expand buttons for files that don't have any hunks.
* Fixed file context menu crash in a newly initialized repository.
* Fixed indexer crash on failure to diff binary blobs that don't appear to be binary based on the heuristic.
* Fixed crash when the .git/gitahead directory gets created with the wrong permissions.
#### Added
* Added options to sort file list ascending or descending.
* Added commit list context menu option to star commits.
* Added commit list context menu action to create a new branch at an arbitrary commit.
* Checkout on double-click in the reference view.
#### Changed
* Show renamed and copied files expanded by default in diff view.
----
### v1.3.12 - 2017-08-04
#### Fixed
* Fixed intermittent indexer crash.
* (Win) Fixed failure to install indexer component.
* Fixed external diff/merge tool display bug on paths containing spaces.
#### Added
* Added option to sort file list and diff view by status instead of name.
----
### v1.3.11 - 2017-07-20
#### Fixed
* Fixed missing status badges in tree view.
* Fixed search index locking issue.
* Clean up search index temporary files after crash.
* Fixed performance issue on some repositories.
#### Added
* Added syntax highlighting for .less files.
* Added basic support for LFS clean/smudge filter and pre-push hook.
* (Linux) Install .desktop file and icons for better desktop integration.
----
### v1.3.10 - 2017-07-06
#### Added
* Added ability to search by wildcard query.
* Added ability to search date ranges with before: and after: operators.
#### Changed
* The search index now stores dates in a locale-agnostic format.
* The indexer now runs as an external worker process.
----
### v1.3.9 - 2017-06-26
#### Fixed
* Elide reference badges in the detail view instead of forcing the window to grow.
* Update reference badges in the detail view when a reference is added/deleted/updated.
* Fixed crash in checkout dialog when there aren't any references.
* Fixed failure to push from 'Push To...' dialog when opened from a log link.
* Push to the remote reference that corresponds to the current branch's upstream by default.
#### Added
* Added clear button to all fields in the advanced search popup.
----
### v1.3.8 - 2017-06-21
#### Fixed
* Fixed failure to connect to the update server since v1.3.6.
#### Added
* Added file context menu action to open files in the default file browser.
----
### v1.3.7 - 2017-06-19
#### Fixed
* Disallow removing command line tools that weren't installed by the app.
* Delete remote branch by the name of the upstream instead of the name of the local branch.
#### Added
* Added an interface for creating annotated tags.
* Show annotated tag author and message in hover text in the reference view.
* Added context menu item to delete local branches from the reference view.
* Added explanatory tool tips to the advanced search popup.
* Added an advanced option in the 'Push To...' dialog to push to a different remote name.
----
### v1.3.6 - 2017-06-12
#### Fixed
* Fixed failure to remember the new default repository path when accepting the init repo dialog.
* Fixed regression in refreshing the status diff after creating a new file in the working directory.
#### Added
* Added init/deinit column to submodule configuration table.
* Allow context menu to discard changes to deleted files.
* Enable merging and rebasing on remote branches and tags.
* Added new push hints for the case where no remotes are configured for the current branch.
----
### v1.3.5 - 2017-05-30
#### Fixed
* Fixed staged state of dirty submodules. Disable dirty submodule check boxes.
* Fixed crash on submodule update when there are modules that have been inited but don't exist on disk.
* (Win) Fixed search completion popup layout.
* (Win) Fixed intermittent crash when files in ignored directories change on disk.
#### Added
* Added context menu item to discard changes in selected files.
* (Linux) Added command line installer.
#### Changed
* Allow remote host accounts to be collapsed in the repository browser.
* Moved new branch dialog upstream option into an advanced settings section.
----
### v1.3.4 - 2017-05-23
#### Fixed
* Fixed automatic scaling issue.
----
### v1.3.3 - 2017-05-22
#### Added
* Added automatic refresh when the working directory changes. Removed refresh on activation.
* Added prepopulated commit message when staging files and the commit message editor is empty.
#### Fixed
* Fixed the initial directory of the clone dialog browse button.
* Fixed crash when staging/unstaging the hunk of an untracked file.
* (Win) Scale application uniformly (although less smoothly) when font scaling is greater than 100%.
#### Changed
* Show the "Commit" button as the default button.
* Double-click in the commit list no longer opens a new tab.
----
### v1.3.2 - 2017-05-10
#### Added
* Added dialog to prompt the user to choose a theme on the first run.
#### Fixed
* Fixed performance issues on some large repositories.
* Fixed commit list graph and reference bugs on some repositories.
* Fixed several dialog default buttons in the dark theme.
* Fixed possible inconsistent state of the search index if the write is interrupted.
----
### v1.3.1 - 2017-05-02
#### Added
* Added a reference badge in the detail view that shows the number of commits since the latest tag. This is similar to git describe.
* Added gnome-keyring support for securely storing passwords on Linux.
* Added copy button next to the short id in the detail view.
* Added automatic refresh when the application loses and regains focus.
* Added inline autocomplete to pathspec filter field.
#### Fixed
* Fixed crash on refresh after a tab is closed.
* Fixed indexer performance issues.
* Fixed possible failure to respect indexer term limits.
#### Changed
* Reimplemented clone/init dialog as a wizard to walk through each step.
----
### v1.3.0 - 2017-04-21
#### Added
* Added redesigned interface for choosing the current reference and setting the pathspec filter.
#### Fixed
* Fixed crash on submodule init.
* Fixed click issue in the star area of the "uncommitted changes" row.
----
### v1.2.6 - 2017-04-13
#### Added
* Added ability to star favorite commits.
* Added option to delete branch from remote when deleting a local branch.
#### Fixed
* Don't collapse log entries when clicking on a link in the log.
* Fixed possible crash during indexing.
* (Linux) Fixed possible crash on start.
----
### v1.2.5 - 2017-04-05
#### Added
* Added a dark theme.
#### Fixed
* Fixed possible crash during indexing.
* (Win) Fixed failure to start with a valid evaluation license in some cases.
----
### v1.2.4 - 2017-03-15
#### Added
* Show kind of reference in the reference list label.
#### Fixed
* Fixed possible crash on diffs with no newline at the end of the file.
----
### v1.2.3 - 2017-03-10
#### Added
* Highlight changed words in diff output.
* Added advanced search button to the bottom of the search completion popup.
* Added prompt to add a new local branch when checking out a remote tracking branch.
* Respect 'GIT\_SSL\_NO\_VERIFY' environment variable and 'http.sslVerify' config setting when validating certificates.
#### Fixed
* Fixed diff error near the end of files with no trailing newline.
* Fixed possible destructive changes when clicking on a stale 'abort merge' link.
* Fixed incorrect error message when a merge fails because of conflicts with uncommitted changes.
----
### v1.2.2 - 2017-03-02
#### Added
* Added drag-and-drop into the diff area to copy files into the repository.
* Added menu action to create new files with the built-in editor.
* Added interface for creating new files and copying files into newly initialized repositories.
* Added initial repository setup actions to repository chooser. Made the remote host actions more obviously clickable.
#### Fixed
* Fixed regression in staging directories.
* Restore missing file context menu items.
* Fixed failure to amend the very first commit in a repository.
* Fixed failure to load the correct image file from the selected commit.
* (Mac) Fixed regression in scrolling with the scroll wheel over untracked files.
----
### v1.2.1 - 2017-02-24
#### Added
* Added basic GitLab integration.
#### Fixed
* Fixed performance issues in commit editor.
* Fixed crash when adding a new branch from the config dialog in a newly initialized repository.
* Fixed crash when clicking on the 'Edit Config File...' button in the global settings dialog and the global settings file (~/.gitconfig) doesn't exist.
* Open remote account dialog with the correct kind when double-clicking on one of the account kinds in the repository chooser.
----
### v1.2.0 - 2017-02-17
#### Added
* Added external edit, diff, and merge context menu actions.
#### Fixed
* Disable minus button when an error node is selected in the repository chooser remote list.
* Fixed wrong error message in license error dialog.
* (Win) Fixed advanced search button position.
#### Changed
* Changed settings and config dialogs to reflect newly added external tool settings.
----
### v1.1.4 - 2017-02-04
#### Fixed
* Improved performance when selecting uncommitted changes with modified submodules.
#### Changed
* (Linux) Changed application style to be more consistent.
----
### v1.1.3 - 2017-01-31
#### Fixed
* Fixed failure to stage/unstage deleted files.
----
### v1.1.2 - 2017-01-25
#### Fixed
* Fixed pathological log view performance on checkout when many files are modified.
* Fixed layout of reference badges in commit list.
* (Win) Fixed clipped file name in diff view.
----
### v1.1.1 - 2017-01-16
#### Fixed
* (Linux) Fixed 'invalid SSL certificate' error.
* (Win) Fixed broken context menu on right-click on selected files in the file list.
----
### v1.1.0 - 2017-01-05
#### Added
* Added new reference popup with a tabbed interface and a field to filter by name.
#### Fixed
* Fixed failure to remove config section headers when the last key is removed.
* Queue up new fetch, pull, push and submodule update requests when an existing asynchronous operation is ongoing instead of silently dropping them.
----
### v1.0.3 - 2016-12-27
#### Fixed
* Fixed failure to authenticate with SSH when the public key file is missing.
* Fixed failure to remove recent repositories when clicking the minus button.
* Fixed crash on navigating forward through history to the HEAD branch when there are no uncommitted changes.
----
### v1.0.2 - 2016-12-06
#### Added
* Added missing syntax highlight file extension mappings for several languages.
* Added more specific error message for cases where the SSH identity file isn't found.
#### Fixed
* (Win) Fixed slight visual issue with multiple items selected in the commit list.
----
### v1.0.1 - 2016-12-02
#### Added
* Remember the maximum size of the file list when the splitter is moved.
* Added missing Fortran file extension mappings.
#### Fixed
* Fixed several history issues.
* Fixed failure to authenticate with SSH key when the server also supports username/password authentication.
----
### v1.0.0 - 2016-11-28
#### Fixed
* Fixed several conflict workflow issues.
* Fixed failure to prompt to save on editor close.
* Fixed crash on diff after files change on disk.
* Fixed several file context menu issues.
* Fixed some cases of pathological performance on filter by pathspec.
* (Mac) Fixed failure to select the current item in the pathspec popup when pressing return/enter.
#### Added
* Split conflicted files into hunks around conflict markers.
* Abort merge with reset --merge semantics instead of --hard semantics.
* Truncate blame message text with ellipsis instead of clipping at the bottom.
----
### v0.9.3 - 2016-11-14
#### Fixed
* Fixed wrong error message in some cases.
* Fixed regression when staging or discarding hunks.
* (Win) Fixed proxy issue when connecting to remotes.
----
### v0.9.2 - 2016-11-09
#### Added
* Added diff view context menu option to filter history by the selected path.
#### Fixed
* Fixed issues related to reading the merge message from disk.
* Fixed regression in scrolling to find matches in the diff view.
#### Changed
* Changed pathspec placeholder text.
----
### v0.9.1 - 2016-11-06
#### Added
* Order all conflicts first.
* Show the merge head in the commit list when merging.
* Added checkout context menu action to the diff view.
#### Fixed
* Only store credentials in the keychain after successful remote connection.
* Fixed commit list order for commits that happen at the same time (e.g. because of rebase).
* Fixed file list scroll bugs.
* Fixed crash on editor open in newly initialized repos.
* Fixed merge workflow to allow conflicts to be resolved and merges committed.
#### Changed
* Changed the meaning of selection in the file list to filter the diff view by the selected files.
----
### v0.9.0 - 2016-10-31
#### Added
* Added licensing scheme. Contact us to beta test a permanent license.
* Allow tabs to be reordered by dragging.
* Respect branch.<name>.rebase and pull.rebase when pulling.
* Search up through parent directories when opening a directory that doesn't contain a repository.
#### Fixed
* Disallow invalid branch names in the new branch dialog.
* Disable discard buttons for submodules.
* Read ~/.ssh/config to determine which identity file to load.
* Fixed password save regression.
* Fixed incorrect line endings after discarding hunks with filter settings (e.g. core.autocrlf).
#### Changed
* Disable warning about opening an invalid repository from the command line. It's more annoying than useful.
----
### v0.8.15 - 2016-10-18
#### Added
* Added ability to select and copy from the log view as both plain and rich text.
* Added drag-and-drop of repository folders into the main window to open in a new tab.
* (Mac) Added drag-and-drop of repository folders into the application icon to open.
* (Win) Added menu bar to editor windows. Introduces several missing shortcuts.
#### Fixed
* Fixed editor automatic scroll regressions.
* (Win) Fixed editor find regression.
----
### v0.8.14 - 2016-10-13
#### Added
* Added initial Linux beta build.
#### Fixed
* Fixed additional proxy auto-detection issue.
* Fixed some HTTPS authentication issues and incorrect errors.
----
### v0.8.13 - 2016-09-30
#### Added
* Auto-detect system proxy settings.
* (Win) Added VS2013 redistributables to the installer.
* (Win) Removed component page from the installer.
* (Win) Added check box to launch the application after the installer finishes.
#### Fixed
* Start merge/revert/cherry-pick/stash dialog without keyboard focus in the message editor.
* Fixed hang or bad performance on some hunks at the very edge of showing the horizontal scroll bar.
* Fixed failure to resize hunks after font size changes.
* Fixed failure to update submodules after pull when the pull results in a merge.
* Fixed loss of search filter results after a branch is updated.
* Fixed crash on newly initialized repositories with untracked files.
* Select head reference after the current reference is deleted.
----
### v0.8.12 - 2016-09-23
#### Added
* Added fake credentials implementation for Windows.
#### Fixed
* Fixed regression in commit list update after a remote reference changes.
* Fixed config dialog performance issue on repositories with many submodules.
* Fixed size of untracked file hunks in diff view.
* Fixed stale status diff after aborting a merge.
* (Win) Fixed stash crash when no default signature is set.
* (Win) Fixed persistent file lock and several issues related to it.
----
### v0.8.11 - 2016-09-13
#### Added
* Added new Repository menu with Commit and Amend Commit actions.
#### Fixed
* Improved layout of long file names in diff view.
* Changed behavior of hunk check box to only collapse the hunk when clicked.
* Fixed performance issue during commit message editing with lots of staged files.
----
### v0.8.10 - 2016-08-30
#### Fixed
* Fixed possible crash on remote transfer cancel.
* Fixed regression in setting branch upstream from config dialog.
* Improved responsiveness of changing tabs and some other operations.
* Fixed hunks sometimes starting out collapsed after commit.
* Fixed commit list update after the current branch's upstream changes.
* Fixed responsiveness issues when the commit list changes.
* Fixed possible remote transfer thread hang after prompting for HTTPS credentials.
* (Win) Fixed crash on reference list navigation with arrow keys.
----
### v0.8.9 - 2016-08-23
#### Added
* Added submodule update dialog to update a subset of submodules and set parameters.
* Added Ok button to dismiss settings dialog.
#### Fixed
* Fixed intermittent crash during status calculation.
* Restore correct reference from history and log links.
* Improved responsiveness and memory usage on files with a large number of hunks.
----
### v0.8.8 - 2016-08-12
#### Added
* Update submodules recursively by default.
#### Fixed
* Fixed authentication hang when ssh-agent is running but doesn't have any keys.
* Fixed horizontal scroll of log view.
* Fixed layout and input validation in new branch dialog.
* Fixed crash on repositories that contain remote branches that don't refer to a valid remote.
----
### v0.8.7 - 2016-08-05
#### Added
* Enable reset for the head commit (e.g. for hard reset).
* Improved responsiveness after staging/unstaging with many modified/untracked files.
* Restore commit list selection after update instead of always selecting the first row.
#### Fixed
* Fixed failure to fetch annotated tags added to existing commits.
* Fixed commit list sometimes switching to newly added references instead of updating the currently selected reference.
* Fixed possible deadlock caused by a race between remote transfer cancel and credential lookup.
* Fixed regression in setting initial focus on the commit list.
* Fixed possible black background in blame margin.
* Fixed possible crash on tab close.
* Fixed checkout check box visibility regression in new branch dialog.
----
### v0.8.6 - 2016-07-29
#### Added
* Added tabbed interface.
#### Fixed
* Fixed binary detection and loading of untracked image files in the diff view.
* Fixed visual artifacts and warning message in the blame margin when the repository contains a single commit.
----
### v0.8.5 - 2016-07-18
#### Added
* Changed file context copy menu to include short, relative and long names.
* Added option to show a heat map in the blame margin (enabled by default).
#### Fixed
* Fixed crash on opening a bare repository.
* Fixed bare repository name in the title bar.
* Fixed crash on starting an asynchronous remote operation when another is already running.
* Fixed failure to reset scroll width after loading a new file in the editor.
----
### v0.8.4 - 2016-07-13
#### Added
* Load advanced search query items from the query string.
* Added completion suggestions for each field of the advanced search widget.
* Added context menu items to copy file name and path.
* Added option to update submodules after pull (disabled by default).
* (Win) Added basic crash logging.
----
### v0.8.3 - 2016-07-08
#### Fixed
* Disable branch checkout and creation when the HEAD is unborn.
* Fixed regression in staging untracked files.
* Fixed status and staging of files in newly inited repos.
* Allow commit on an unborn HEAD.
#### Added
* Show the unborn HEAD name in the title bar.
* Show the commit id in the title bar when the HEAD is detached.
* Added a 'Checkout' command to the commit list context menu.
* Added option to automatically fetch periodically (enabled by default).
* Added option to automatically push after commit (disabled by default).
----
### v0.8.2 - 2016-07-01
#### Fixed
* Fixed submodule update hang after the first submodule.
* Fixed crash when the Submodule->Open menu is activated without an active window.
* Fixed possible corruption of the git index when a checkout is triggered during search indexing.
* (Win) Fixed initial location of settings dialog when opened at a specific index.
#### Added
* Allow hunks to be discarded individually.
* Allow about dialog version string to be selected.
* Added option to push all tags.
----
### v0.8.1 - 2016-06-28
#### Fixed
* Fixed crash on status diff cancel.
----
### v0.8.0 - 2016-06-28
#### Added
* Hunks can now be staged/unstaged individually.
* Added delete confirmation dialogs before deleting remotes and branches.
* Added warning when deleting branches that aren't fully merged.
#### Fixed
* Disallow deleting the current branch.
* Disallow accepting new remote dialog with empty name or URL.
#### Known Issues
* Blame doesn't update after changes to the editor buffer.
* Pathspec field auto-complete only works for the first path element.
* (Win) Editor windows don't have a menu bar or key bindings.
----
### v0.7.9 - 2016-06-10
#### Added
* Group remote branches by remote in the reference popup.
* Added hunk header edit buttons that start editing at the location of the hunk.
* Added cherry-pick context menu action.
* Added repository special state (e.g. MERGING, REBASING, etc.) to the title bar.
#### Fixed
* Fixed regression in reporting errors from remote connections.
* (Win) Fixed crash when opening new editor windows.
* (Win) Fixed tree expansion indicator style changing after showing dialogs.
----
### v0.7.8 - 2016-06-05
#### Added
* Added parameter to 'Fetch/Pull From' dialogs to update existing tags.
* Connect to remotes asynchronously and throttle log updates to fix performance issues.
* Added option to init and update submodules.
* Added username field to HTTPS credentials prompt.
* Changed commit editor status text to include the total number of files.
* Added progress indicator and cancel button to asynchronous log entries.
* Allow clone to be canceled by clicking on the cancel button.
* Allow status diff to be canceled by clicking on the cancel button.
#### Fixed
* Fixed spurious update notification for existing tags that aren't really updated by fetch.
* Fixed line stats (plus/minus) widget resize issue.
* Fixed submodule update to ignore modules that haven't been inited.
* Removed superfluous timestamps from clone dialog log area.
* Fixed initial size of config dialog when opening it up at a specific index.
----
### v0.7.7 - 2016-05-29
#### Added
* Generate fake user name and email when they're not in settings.
* Added log hints to set user name and email and amend commit.
* Added repository configuration interface for general settings.
* Added global git configuration interface to global preferences.
* Added commit editor status label to indicate number of staged files.
* Added draggable splitter between file list and diff view.
* Added swift syntax highlighting.
#### Fixed
* Fixed initial visibility of staged files.
----
### v0.7.6 - 2016-05-24
#### Added
* Added better error reporting for failure to commit
#### Fixed
* Fixed check state of newly added files in the diff view
* Hide tree view during status update
* Fixed crash on submodule open when the submodule hasn't been inited
----
### v0.7.5 - 2016-05-23
#### Added
* Added warning prompt before trying to commit on a detached HEAD
* Added commit message summary to log entries for commit and rebase
* Added markdown syntax highlighting
* Log stash and stash pop operations
* Added double-click to open submodule in tree view, file list and diff view
* Added fetch/pull/push from/to actions with a dialog to choose parameters
* Added log hints to set upstream branch on push and allow push without upstream
#### Fixed
* Collapse existing log entries when new top-level entries are added instead of when the log is made visible
* Keep log open while editing merge commit message
----
### v0.7.4 - 2016-05-16
#### Added
* Added submodule config panel
* Added submodule menu and changed layout and shortcuts of some existing menus
* Added submodule update
* Added link to open submodules from diff view
* Added tag context menu action in the commit list
#### Fixed
* Fixed spurious conflict on rebase with modified submodules
* Fixed dirty submodules always shown as staged
* Fixed toolbar remote button badge update when the HEAD is detached or the current branch doesn't have an upstream
* Removed all synchronization between the indexer and the main thread by abandoning the notion of restarting a running indexer. Now the main thread signals the indexer to cancel and waits for it to finish before restarting.
----
### v0.7.3 - 2016-05-05
#### Added
* Added disambiguation of identically named repositories in recent repository lists
* Added option (with shortcut) to checkout the currently selected branch
* Added daily check for updates when automatic update is enabled
* Added support for showing multiple roots in the commit list
* Show the upstream branch associated with a local branch even when it's not reachable from the local ref
#### Fixed
* Fixed lookup of credentials by percent encoded username (e.g. email address)
* Fixed deadlock on pull caused by a race between indexing starting after fetch and canceling indexing before fast-forward
----
### v0.7.2 - 2016-05-02
#### Added
* Added refresh item to view menu
#### Fixed
* Fixed performance issues related to large numbers of untracked/modified files
* (Mac) Removed install rpaths from Qt frameworks to placate Gatekeeper
* (Mac) Changed update mount point to a guaranteed unique temporary directory. Fixes update failure when the original download image is still mounted.
----
### v0.7.1 - 2016-04-28
#### Fixed
* Updated libgit2 to fix a large memory leak
----
### v0.7.0 - 2016-04-26
#### Known Issues
* Hunks can't be staged individually even though they have a check box
* Dirty submodules always show up as staged
* Pathspec field auto-complete only works for the first path element
* Blame doesn't update changed editor buffers
* (Windows) The style of tree expansion indicators sometimes changes after showing dialogs
* There are several other bugs and missing features in the issue tracker
* This list is by no means exhaustive...
| 27.313325 | 260 | 0.738067 | eng_Latn | 0.997219 |
f47e8ab86ddb43a3f10fb110668bb598796cddd1 | 1,652 | md | Markdown | wdk-ddi-src/content/ks/ns-ks-kse_node.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ks/ns-ks-kse_node.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ks/ns-ks-kse_node.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:ks.__unnamed_struct_5
title: KSE_NODE (ks.h)
description: The KSE_NODE structure specifies an event request on a specific node.
old-location: stream\kse_node.htm
tech.root: stream
ms.assetid: 89446165-cdc3-414d-bcce-f2c978d94547
ms.date: 04/23/2018
keywords: ["KSE_NODE structure"]
ms.keywords: "*PKSE_NODE, KSE_NODE, KSE_NODE structure [Streaming Media Devices], PKSE_NODE, PKSE_NODE structure pointer [Streaming Media Devices], ks-struct_701a51ab-90d7-47d6-8e40-bd30d0ddd7b9.xml, ks/KSE_NODE, ks/PKSE_NODE, stream.kse_node"
f1_keywords:
- "ks/KSE_NODE"
- "KSE_NODE"
req.header: ks.h
req.include-header: Ks.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- ks.h
api_name:
- KSE_NODE
targetos: Windows
req.typenames: KSE_NODE, *PKSE_NODE
---
# KSE_NODE structure
## -description
The KSE_NODE structure specifies an event request on a specific node.
## -struct-fields
### -field Event
A structure of type <a href="https://docs.microsoft.com/previous-versions/ff561744(v=vs.85)">KSEVENT</a> that identifies the requested event.
### -field NodeId
Specifies the node ID.
### -field Reserved
Reserved for system use. Should be set to zero.
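For orientation, here is a sketch of the layout implied by the field descriptions above; consult ks.h for the authoritative declaration.

```c
// Sketch of the layout implied by the field descriptions above;
// consult ks.h for the authoritative declaration.
typedef struct {
    KSEVENT Event;     // Identifies the requested event
    ULONG   NodeId;    // Specifies the node ID
    ULONG   Reserved;  // Reserved for system use; set to zero
} KSE_NODE, *PKSE_NODE;
```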
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ks/ns-ks-ksp_node">KSP_NODE</a>
| 19.903614 | 244 | 0.710048 | eng_Latn | 0.292282 |
f47f14c588260c071a607838c461e0edf0a88315 | 11,482 | md | Markdown | content-repo/extra-docs/releases/19.4.2.md | JT-NL/content-docs | 6721a7953a058e38fb137c8199530e4fd555119a | [
"MIT"
] | 24 | 2019-12-05T20:22:50.000Z | 2022-02-24T14:54:03.000Z | content-repo/extra-docs/releases/19.4.2.md | JT-NL/content-docs | 6721a7953a058e38fb137c8199530e4fd555119a | [
"MIT"
] | 901 | 2019-12-05T16:07:04.000Z | 2022-03-31T13:39:26.000Z | content-repo/extra-docs/releases/19.4.2.md | JT-NL/content-docs | 6721a7953a058e38fb137c8199530e4fd555119a | [
"MIT"
] | 39 | 2019-12-05T15:52:34.000Z | 2022-02-24T14:54:06.000Z | ## Demisto Content Release Notes for version 19.4.2 (22301)
##### Published on 30 April 2019
### Integrations
#### 10 New Integrations
- __ANY.RUN__
ANY.RUN is a cloud-based sandbox with interactive access.
- __Carbon Black Enterprise Protection V2__
Carbon Black Enterprise Protection is a next-generation endpoint threat prevention solution to deliver a portfolio of protection policies, real-time visibility across environments, and comprehensive compliance rule sets in a single platform.
- __Cherwell__
Cherwell is a cloud-based IT service management solution.
- __Google BigQuery__
Google BigQuery is a data warehouse for querying and analyzing large databases.
- __Microsoft Graph Mail__
Microsoft Graph lets your app get authorized access to a user's Outlook mail data in a personal or organization account.
- __Microsoft Graph User__
Unified gateway to security insights - all from a unified Microsoft Graph User API.
- __OnboardingIntegration__
Creates mock email incidents using one of two randomly selected HTML templates. Textual content is randomly generated and defined to include some text (100 random words) and the following data (at least 5 of each data type): IP addresses, URLs, SHA-1 hashes, SHA-256 hashes, MD5 hashes, email addresses, domain names.
- __Symantec Management Center__
Symantec Management Center provides a unified management environment for the Symantec Security Platform portfolio of products.
- __FortiSIEM__
Search and update FortiSIEM events, and manage resource lists.
- __OPSWAT-Metadefender v2__
OPSWAT-Metadefender is a multi-scanning engine that uses 30+ anti-malware engines to scan files for threats.
#### 17 Improved Integrations
- __urlscan.io__
Added support for the __urlscan-get-http-transactions__ script.
- __ServiceNow__
Added an option to select the timestamp field to filter by when fetching incidents, and added enforcement of the fetch-incidents limit and last run.
- __CounterTack__
Added two commands.
- ___countertack-search-endpoints___
- ___countertack-search-behaviors___
- __Gmail__
Added two commands.
- ___gmail-list-filters___
    - ___gmail-remove-filter___
- __Fidelis Elevate Network__
Fixed the _ioc_ filter in the ___fidelis-list-alerts___ command.
- __Atlassian Jira v2__
Improved handling of _IssueTypeName_ and _issueJson_ in the ___jira-create-issue___ command.
- __PagerDuty v2__
Added two commands.
- ___PagerDuty-get-incident-data___
- ___PagerDuty-get-service-keys___
- __Anomali ThreatStream__
Improved handling of partial responses from Anomali ThreatStream.
- __CrowdStrike Falcon Intel__
Fixed how dates are parsed in the ___cs-report___ command.
- __Intezer__
Several improvements to the ___file___ command.
- Added the _sha256_ argument.
- Invalid hashes are now regarded as a warning.
- __Palo Alto Networks Magnifier__
Fixed the integration name and logo.
- __Mail Sender (New)__
Improved error messages.
- __Palo Alto Networks Minemeld__
Fixed the integration display name.
- __Palo Alto Networks PAN-OS__
Added eight commands.
- ___panorama-list-edl___
- ___panorama-get-edl___
- ___panorama-create-edl___
- ___panorama-edit-edl___
- ___panorama-delete-edl___
- ___panorama-refresh-edl___
- ___panorama-register-ip-tag___
- ___panorama-unregister-ip-tag___
- __VirusTotal__
Added the _fullResponseGlobal_ parameter. The parameter determines whether to return all results, which can number in the thousands. If _true_, returns all results and overrides the _fullResponse_ and _long_ arguments (if they are set to "false") in a command. If _false_, the _fullResponse_ and _long_ arguments in the command determine how results are returned.
- __Palo Alto Networks WildFire__
- Improved the __file__ command.
- Added the _md5_ and _sha256_ arguments.
- Invalid hashes are now regarded as a warning.
- Improved the ___wildfire-report___ command.
- Added the _sha256_ argument.
- Deprecated the _hash_ argument.
- Added the __wildfire-get-sample__ command.
- __Zscaler__
Added the ___zscaler-sandbox-report___ command.
#### Deprecated
- __OPSWAT-Metadefender (Deprecated)__
Deprecated. Use the OPSWAT-Metadefender v2 integration instead.
---
### Scripts
#### 11 New Scripts
- __CherwellCreateIncident__
A sample script that creates an incident in Cherwell. The script wraps the __cherwell-create-business-object__ command in the Cherwell integration.
- __CherwellGetIncident__
A sample script that retrieves an incident from Cherwell. The script wraps the __cherwell-get-business-object__ command of the Cherwell integration.
- __CherwellIncidentOwnTask__
A sample script that links an incident to a task in Cherwell. The script wraps the __cherwell-link-business-object__ command of the Cherwell integration.
- __CherwellIncidentUnlinkTask__
A sample script that unlinks a task from an incident in Cherwell. The script wraps the __cherwell-unlink-business-object__ command of the Cherwell integration.
- __CherwellQueryIncidents__
A sample script that queries incidents from Cherwell. The script wraps the __cherwell-query-business-object__ command of the Cherwell integration.
- __CherwellUpdateIncident__
A sample script that updates an incident in Cherwell. The script wraps the __cherwell-update-business-object__ command of the Cherwell integration.
- __DBotPredictPhishingWords__
Predict text label using a pre-trained machine learning phishing model, and get the most important words used in the classification decision.
- __FileToBase64List__
Encode a file as base64 and store it in a Demisto list.
- __DemistoLeaveAllInvestigations__
Removes a user from all investigations in which they are involved (clears the incidents in the left pane). Incidents that the user owns remain in the left pane. Requires the Demisto REST API integration to be configured for the server.
- __OnboardingCleanup__
Cleans up the incidents and indicators created by the __OnboardingIntegration__.
- __UrlscanGetHttpTransactions__
Provides the functionality to get the HTTP transactions made for a given URL using the UrlScan integration. To properly use this script, use it inside a playbook, and select to run it without a worker. This requires fewer system resources during the polling action. In the playbook task that executes this script, go to the __Advanced__ section and select the __Run without a worker__ checkbox.
#### 12 Improved Scripts
- __CheckDockerImageAvailable__
Improved the script to work with older demisto/python images.
- __ParseEmailFiles__
- Improved email file type detection.
- Fixed an issue when EML files have special characters.
- __ADGetUser__
Enabled script execution with _Active Directory Query_ instances only.
- __CommonServerPython__
Added the _list_ type to raw_response in the ___raw_outputs___ command.
- __ExtractIndicatorsFromWordFile__
The automation executes as expected when the entry is a single object.
- __FetchFromInstance__
Improved script execution.
- __GenericPollingScheduledTask__
Added an option to pass CSV arguments and values to _pollingCommandArgName_.
- __ReadPDFFile__
Added an error when reading image files fails.
- __RunPollingCommand__
Added an option to pass CSV arguments and values to _pollingCommandArgName_.
- __ScheduleGenericPolling__
Added an option to pass CSV arguments and values to _pollingCommandArgName_.
- __UserEnrichAD__
Updated a dependency for the __activedir__ brand.
- __IsIPInRanges__
- Removed the condition tag.
    - Improved the description of the IP range input.
---
### Playbooks
#### 16 New Playbooks
- __Account Enrichment - Generic v2__
- Reduced indicator duplication.
- Improved task names, descriptions, input selectors, and auto-extract settings.
- The new version does not provide reputation.
- __Detonate File - ANYRUN__
Detonates one or more files using the ANY.RUN sandbox integration. Returns relevant reports to the War Room, and file reputations to the context data. All file types are supported.
- __Detonate File From URL - ANYRUN__
Detonates one or more remote files using the ANY.RUN sandbox integration. Returns relevant reports to the War Room, and file reputations to the context data. This type of analysis works only for direct download links.
- __Detonate URL - ANYRUN__
Detonates one or more URLs using the ANY.RUN sandbox integration. Returns relevant reports to the War Room, and URL reputations to the context data.
- __Domain Enrichment - Generic v2__
- Reduced indicator duplication.
- Improved task names, descriptions, and auto-extract settings.
- The new version does not provide reputation.
- __Email Address Enrichment - Generic v2__
- Reduced indicator duplication.
- Improved playbook performance and execution.
- The new version does not provide reputation.
- __Endpoint Enrichment - Generic v2__
- Reduced indicator duplication.
- Improved task names and descriptions, and auto-extract settings.
- Improved playbook performance and execution, and DT selector implementation.
- Removed a deprecated SentinelOne integration.
- __Entity Enrichment - Generic v2__
Improved playbook and sub-playbook performance and execution.
- __Entity Enrichment - Phishing v2__
Customized for generic phishing investigations to avoid enrichment of irrelevant entities.
- __File Enrichment - Generic v2__
- Reduced indicator duplication.
- Removed redundant sub-playbooks.
- Simplified playbook structure and conditions.
- The new version does not provide reputation.
- __IP Enrichment - Generic v2__
- Added two separate sub-playbooks; one for internal IPs and one for external IPs.
- The new version does not provide reputation.
- __IP Enrichment - External - Generic v2__
    - Added a new generic playbook for external IP enrichment.
- The new playbook does not provide reputation.
- __IP Enrichment - Internal - Generic v2__
    - Added a new generic playbook for internal IP enrichment.
- The new playbook does not provide reputation.
- __PhishingDemo-Onboarding__
This playbook is part of the on-boarding experience, and focuses on phishing scenarios. To use this playbook, you'll need to enable the __on-boarding__ integration and configure incidents of type _Phishing_. For more information, refer to the on-boarding walkthroughs in the help section.
- __Phishing Investigation - Generic v2__
Improved entity enrichment to avoid enrichment of irrelevant entities.
- __URL Enrichment - Generic v2__
- Reduced indicator duplication.
- Removed reputation commands.
- Simplified playbook structure and implementation.
- The new version does not provide reputation.
#### 5 Improved Playbooks
- __Detonate File - Generic__
Added the __ANYRUN File Detonation__ playbook.
- __Detonate URL - Generic__
Added the __ANYRUN URL Detonation__ playbook.
- __Email Address Enrichment - Generic__
Adjusted version.
- __GenericPolling__
Added support for CSV arguments and values for _PollingCommandArgName_.
- __Process Email - Generic__
_SetIncident_ now retrieves data from the correct context fields.
---
### Incident Layouts
#### Improved Incident Layout
- __Phishing - Summary__
Updated phishing incident type layout.
---
### Classification & Mapping
#### New Classification & Mapping
- __OnboardingIntegration__
Mapping to phishing incidents. | 51.258929 | 389 | 0.79089 | eng_Latn | 0.967457 |
f47f60fc25bf7e29d73a19a784a6eaa8a40fd66e | 5,982 | md | Markdown | aspnetcore/fundamentals/middleware/request-response.md | gregzhadko/AspNetCore.Docs.ru-ru | c67dd33c6d22cce30b1e13298a3fbdb72ca0de64 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/fundamentals/middleware/request-response.md | gregzhadko/AspNetCore.Docs.ru-ru | c67dd33c6d22cce30b1e13298a3fbdb72ca0de64 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/fundamentals/middleware/request-response.md | gregzhadko/AspNetCore.Docs.ru-ru | c67dd33c6d22cce30b1e13298a3fbdb72ca0de64 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Request and response operations in ASP.NET Core
author: jkotalik
description: Learn how to read the request body and write the response body in ASP.NET Core.
monikerRange: '>= aspnetcore-3.0'
ms.author: jukotali
ms.custom: mvc
ms.date: 5/29/2019
no-loc:
- Blazor
- Blazor Server
- Blazor WebAssembly
- Identity
- Let's Encrypt
- Razor
- SignalR
uid: fundamentals/middleware/request-response
ms.openlocfilehash: b6fc7a115cb0f4696d10bf036eadb59028dfb605
ms.sourcegitcommit: d65a027e78bf0b83727f975235a18863e685d902
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 06/26/2020
ms.locfileid: "85404136"
---
# <a name="request-and-response-operations-in-aspnet-core"></a>Операции запросов и ответов в ASP.NET Core
Автор [Джастин Коталик (Justin Kotalik)](https://github.com/jkotalik)
В этой статье объясняется, как читать текст запроса и писать текст ответа. Возможно, вам потребуется написать код для этих операций при создании ПО промежуточного слоя. В других случаях писать такой код обычно не нужно, так как эти операции обрабатываются MVC и Razor Pages.
Существует две абстракции для текста запросов и ответов: <xref:System.IO.Stream> и <xref:System.IO.Pipelines.Pipe>. При чтении запроса <xref:Microsoft.AspNetCore.Http.HttpRequest.Body?displayProperty=nameWithType> — это <xref:System.IO.Stream>, а `HttpRequest.BodyReader` — это <xref:System.IO.Pipelines.PipeReader>. При записи ответа <xref:Microsoft.AspNetCore.Http.HttpResponse.Body?displayProperty=nameWithType> — это <xref:System.IO.Stream>, а `HttpResponse.BodyWriter` — это <xref:System.IO.Pipelines.PipeWriter>.
Рекомендуется использовать [конвейеры](/dotnet/standard/io/pipelines), а не потоки. Потоки удобнее использовать для некоторых простых операций, но производительность конвейеров выше и с ними проще работать в большинстве сценариев. Начиная с ASP.NET Core преимущество отдается внутреннему использованию конвейеров вместо потоков. Примеры:
* `FormReader`
* `TextReader`
* `TextWriter`
* `HttpResponse.WriteAsync`
Streams aren't being removed from the framework. Streams continue to be used throughout .NET, and many stream types don't have pipe equivalents, such as `FileStreams` and `ResponseCompression`.
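To make the mapping concrete, here's a minimal sketch (an illustrative inline middleware, not one of the article's samples) that touches all four abstractions side by side:

```csharp
// Illustrative sketch only; assumes using System.IO and System.IO.Pipelines.
app.Use(async (context, next) =>
{
    Stream requestStream = context.Request.Body;              // Stream view of the request body
    PipeReader requestReader = context.Request.BodyReader;    // PipeReader view of the same body

    Stream responseStream = context.Response.Body;            // Stream view of the response body
    PipeWriter responseWriter = context.Response.BodyWriter;  // PipeWriter view of the same body

    await next();
});
```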
## <a name="stream-examples"></a>Примеры потоков
Предположим, необходимо создать ПО промежуточного слоя, которое считывает весь текст запроса как список строк с разделением на новые строки. Реализация простого потока может выглядеть следующим образом:
> [!WARNING]
> В приведенном ниже коде
> * используется для демонстрации проблем без использования канала для чтения текста запроса;
> * не предназначен для использования в рабочих приложениях.
[!code-csharp[](request-response/samples/3.x/RequestResponseSample/Startup.cs?name=GetListOfStringsFromStream)]
This code works, but there are some issues:
* Before appending to the `StringBuilder`, the example creates another string (`encodedString`) that is thrown away immediately. This process occurs for all bytes in the stream, so the result is extra memory allocation the size of the entire request body.
* The example reads the entire string before splitting on new lines. It's more efficient to check for new lines in the byte array.
Here's an example that fixes some of the preceding issues:
> [!WARNING]
> The following code:
> * Is used to demonstrate the solutions to some of the problems in the preceding code without solving all of them.
> * Is not intended to be used in production apps.
[!code-csharp[](request-response/samples/3.x/RequestResponseSample/Startup.cs?name=GetListOfStringsFromStreamMoreEfficient)]
The preceding example:
* Doesn't buffer the entire request body in a `StringBuilder` unless there aren't any newline characters.
* Doesn't call `Split` on the string.
However, there are still a few issues:
* If newline characters are sparse, much of the request body is buffered in the string.
* The code continues to create strings (`remainingString`) and adds them to the string buffer, which results in an extra allocation.
These issues are fixable, but the code is becoming progressively more complicated with little improvement. Pipelines provide a way to solve these problems with minimal code complexity.
## <a name="pipelines"></a>Конвейеры
В следующем примере показано, как тот же сценарий можно обработать с помощью [PipeReader](/dotnet/standard/io/pipelines#pipe).
[!code-csharp[](request-response/samples/3.x/RequestResponseSample/Startup.cs?name=GetListOfStringFromPipe)]
This example fixes many of the issues that the stream implementations had:
* There's no need for a string buffer, because the `PipeReader` handles bytes that haven't been used.
* Encoded strings are directly added to the list of returned strings.
* Other than the `ToArray` call and the memory used by the string, string creation is allocation free.
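Because the referenced sample lives in an external include, here's a minimal sketch of the same approach (the method name and details are assumed to mirror the sample):

```csharp
// Requires System.Buffers, System.Collections.Generic, System.IO.Pipelines,
// System.Text, and System.Threading.Tasks.
// Sketch of the referenced GetListOfStringFromPipe sample (names assumed):
// read the request body line by line via PipeReader.
private static async Task<List<string>> GetListOfStringFromPipe(PipeReader reader)
{
    var results = new List<string>();

    while (true)
    {
        ReadResult readResult = await reader.ReadAsync();
        ReadOnlySequence<byte> buffer = readResult.Buffer;
        SequencePosition? position;

        do
        {
            // Look for a newline in the currently buffered data.
            position = buffer.PositionOf((byte)'\n');

            if (position != null)
            {
                var line = buffer.Slice(0, position.Value);
                results.Add(Encoding.UTF8.GetString(line));

                // Skip past the consumed line, including the '\n' itself.
                buffer = buffer.Slice(buffer.GetPosition(1, position.Value));
            }
        }
        while (position != null);

        // Report what was consumed and examined, so the PipeReader
        // keeps the unconsumed bytes for the next ReadAsync call.
        reader.AdvanceTo(buffer.Start, buffer.End);

        if (readResult.IsCompleted)
        {
            break;
        }
    }

    return results;
}
```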
## <a name="adapters"></a>Адаптеры
Для `HttpRequest` и `HttpResponse` доступны свойства `Body`, `BodyReader` и `BodyWriter`. Если назначить `Body` другому потоку, новый набор адаптеров автоматически адаптирует каждый тип к другому. Если назначить `HttpRequest.Body` новому потоку, `HttpRequest.BodyReader` автоматически назначается новому `PipeReader`, который создает оболочку для `HttpRequest.Body`.
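For example, here's a sketch (an assumed buffering middleware, not part of the article's samples) where replacing the request `Body` re-adapts `BodyReader` automatically:

```csharp
// Sketch (assumed buffering middleware): after Body is replaced,
// BodyReader transparently wraps the new stream via an adapter.
app.Use(async (context, next) =>
{
    var buffered = new MemoryStream();
    await context.Request.Body.CopyToAsync(buffered);
    buffered.Position = 0;

    context.Request.Body = buffered;
    // context.Request.BodyReader now reads from `buffered`.

    await next();
});
```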
## <a name="startasync"></a>StartAsync
The `HttpResponse.StartAsync` method is used to indicate that headers are unmodifiable and to run `OnStarting` callbacks. When using Kestrel as a server, calling `StartAsync` before using the `PipeReader` guarantees that memory returned by `GetMemory` belongs to Kestrel's internal <xref:System.IO.Pipelines.Pipe> rather than an external buffer.
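As an illustration, here's a sketch (assumed endpoint code, not from the article's samples) of writing through the response `PipeWriter` after `StartAsync`:

```csharp
// Sketch (assumed endpoint code); requires System.IO.Pipelines and System.Text.
await context.Response.StartAsync();            // headers are now immutable

PipeWriter writer = context.Response.BodyWriter;
Memory<byte> memory = writer.GetMemory(16);     // backed by Kestrel's internal Pipe
int written = Encoding.UTF8.GetBytes("pong", memory.Span);
writer.Advance(written);
await writer.FlushAsync();
```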
## <a name="additional-resources"></a>Дополнительные ресурсы
* [System.IO.Pipelines в .NET](/dotnet/standard/io/pipelines)
* <xref:fundamentals/middleware/write>
<!-- Test with Postman or other tool. See image in static directory. -->
| 55.906542 | 518 | 0.803243 | rus_Cyrl | 0.922421 |
f47fe4911a137be79c1f91bccd5f98153b5dfe8b | 6,778 | md | Markdown | articles/azure-monitor/learn/dotnetcore-quick-start.md | jptarqu/azure-docs | de3e738917e10c63531e1c7fd713ffa0a47d08b9 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-10-08T20:29:07.000Z | 2021-12-14T12:46:56.000Z | articles/azure-monitor/learn/dotnetcore-quick-start.md | jptarqu/azure-docs | de3e738917e10c63531e1c7fd713ffa0a47d08b9 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-16T22:09:49.000Z | 2021-03-16T22:09:49.000Z | articles/azure-monitor/learn/dotnetcore-quick-start.md | jptarqu/azure-docs | de3e738917e10c63531e1c7fd713ffa0a47d08b9 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-03T06:03:16.000Z | 2021-03-03T06:03:16.000Z | ---
title: Quickstart ASP.NET Core - Azure Monitor Application Insights
description: Provides instructions to quickly set up an ASP.NET Core Web App for monitoring with Azure Monitor Application Insights
ms.subservice: application-insights
ms.topic: quickstart
author: mrbullwinkle
ms.author: mbullwin
ms.date: 06/26/2019
ms.custom: mvc
---
# Start Monitoring Your ASP.NET Core Web Application
With Azure Application Insights, you can easily monitor your web application for availability, performance, and usage. You can also quickly identify and diagnose errors in your application without waiting for a user to report them.
This quickstart guides you through adding the Application Insights SDK to an existing ASP.NET Core web application. To learn about configuring Application Insights without Visual Studio checkout this [article](../app/asp-net-core.md).
## Prerequisites
To complete this quickstart:
- [Install Visual Studio 2019](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019) with the following workloads:
- ASP.NET and web development
- Azure development
- [Install .NET Core 2.0 SDK](https://dotnet.microsoft.com/download)
- You will need an Azure subscription and an existing .NET Core web application.
If you don't have an ASP.NET Core web application, you can use our step-by-step guide to [create an ASP.NET Core app and add Application Insights.](../app/asp-net-core.md)
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
## Sign in to the Azure portal
Sign in to the [Azure portal](https://portal.azure.com/).
## Enable Application Insights
Application Insights can gather telemetry data from any internet-connected application, regardless of whether it's running on-premises or in the cloud. Use the following steps to start viewing this data.
1. Select **Create a resource** > **Developer tools** > **Application Insights**.
> [!NOTE]
>If this is your first time creating an Application Insights resource you can learn more by visiting the [Create an Application Insights Resource](../app/create-new-resource.md) doc.
A configuration box appears; use the following table to fill out the input fields.
| Settings | Value | Description |
| ------------- |:-------------|:-----|
| **Name** | Globally Unique Value | Name that identifies the app you are monitoring |
| **Resource Group** | myResourceGroup | Name for the new resource group to host App Insights data. You can create a new resource group or use an existing one. |
| **Location** | East US | Choose a location near you, or near where your app is hosted |
2. Click **Create**.
## Configure App Insights SDK
1. Open your ASP.NET Core Web App **project** in Visual Studio > Right-click on the AppName in the **Solution Explorer** > Select **Add** > **Application Insights Telemetry**.
![Add Application Insights Telemetry](./media/dotnetcore-quick-start/2vsaddappinsights.png)
2. Click the **Get Started** button
3. Select your account and subscription > Select the **Existing resource** you created in the Azure portal > Click **Register**.
4. Select **Project** > **Manage NuGet Packages** > **Package source: nuget.org** > **Update** the Application Insights SDK packages to the latest stable release.
5. Select **Debug** > **Start without Debugging** (Ctrl+F5) to Launch your app
![Application Insights Overview Menu](./media/dotnetcore-quick-start/3debug.png)
> [!NOTE]
> It takes 3-5 minutes before data begins appearing in the portal. If this app is a low-traffic test app, keep in mind that most metrics are only captured when there are active requests or operations.
## Start monitoring in the Azure portal
1. Reopen the Application Insights **Overview** page in the Azure portal by selecting **Home** and under recent resources select the resource you created earlier, to view details about your currently running application.
![Application Insights Overview Menu](./media/dotnetcore-quick-start/4overview.png)
2. Click **Application map** for a visual layout of the dependency relationships between your application components. Each component shows KPIs such as load, performance, failures, and alerts.
![Application Map](./media/dotnetcore-quick-start/5appmap.png)
3. Click on the **App Analytics** icon ![Application Map icon](./media/dotnetcore-quick-start/006.png) **View in Analytics**. This opens **Application Insights Analytics**, which provides a rich query language for analyzing all data collected by Application Insights. In this case, a query is generated for you that renders the request count as a chart. You can write your own queries to analyze other data.
![Analytics graph of user requests over a period of time](./media/dotnetcore-quick-start/6analytics.png)
4. Return to the **Overview** page and examine the KPI Dashboards. This dashboard provides statistics about your application health, including the number of incoming requests, the duration of those requests, and any failures that occur.
![Health Overview timeline graphs](./media/dotnetcore-quick-start/7kpidashboards.png)
5. On the left click on **Metrics**. Use the metrics explorer to investigate the health and utilization of your resource. You can click **Add new chart** to create additional custom views or select **Edit** to modify the existing chart types, height, color palette, groupings, and metrics. For example, you can make a chart that displays the average browser page load time by picking "Browser page load time" from the metrics drop down and "Avg" from aggregation. To learn more about Azure Metrics Explorer visit [Getting started with Azure Metrics Explorer](../platform/metrics-getting-started.md).
![Metrics tab: Average browser page load time chart](./media/dotnetcore-quick-start/8metrics.png)
## Clean up resources
When you are done testing, you can delete the resource group and all related resources. To do so follow the steps below.
> [!NOTE]
> If you used an existing resource group the instructions below will not work and you will need to just delete the individual Application Insights resource. Keep in mind anytime you delete a resource group all underlying resources that are members of that group will be deleted.
1. From the left-hand menu in the Azure portal, click **Resource groups** and then click **myResourceGroup**.
2. On your resource group page, click **Delete**, type **myResourceGroup** in the text box, and then click **Delete**.
## Next steps
> [!div class="nextstepaction"]
> [Find and diagnose run-time exceptions](./tutorial-runtime-exceptions.md)
| 59.982301 | 599 | 0.758483 | eng_Latn | 0.975183 |
f47ffa4711cf96813d65fefe8c825c0c0ad4759d | 17,177 | md | Markdown | packages/forms/docs/02-getting-started.md | chiiya/filament | b827ada3ab8bf78780763c9565edeac90c6369a6 | [
"MIT"
] | 1 | 2022-01-09T15:14:31.000Z | 2022-01-09T15:14:31.000Z | packages/forms/docs/02-getting-started.md | chiiya/filament | b827ada3ab8bf78780763c9565edeac90c6369a6 | [
"MIT"
] | null | null | null | packages/forms/docs/02-getting-started.md | chiiya/filament | b827ada3ab8bf78780763c9565edeac90c6369a6 | [
"MIT"
] | null | null | null | ---
title: Getting Started
---
<iframe width="560" height="315" src="https://www.youtube.com/embed/Kab21689E5A" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Preparing your Livewire component
Implement the `HasForms` interface and use the `InteractsWithForms` trait:
```php
<?php
namespace App\Http\Livewire;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms // [tl! focus]
{
use Forms\Concerns\InteractsWithForms; // [tl! focus]
public function render(): View
{
return view('edit-post');
}
}
```
In your Livewire component's view, render the form:
```blade
<form wire:submit.prevent="submit">
{{ $this->form }}
<button type="submit">
Submit
</button>
</form>
```
Finally, add any [fields](fields) and [layout components](layout) to the Livewire component's `getFormSchema()` method:
```php
<?php
namespace App\Http\Livewire;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Post $post;
public $title;
public $content;
public function mount(): void
{
$this->form->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
}
protected function getFormSchema(): array // [tl! focus:start]
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
// ...
];
} // [tl! focus:end]
public function submit(): void
{
// ...
}
public function render(): View
{
return view('edit-post');
}
}
```
Visit your Livewire component in the browser, and you should see the form components from `getFormSchema()`.
<img src="https://user-images.githubusercontent.com/41773797/147614478-5b40c645-107e-40ac-ba41-f0feb99dd480.png">
## Filling forms with data
Often, you will need to prefill your form fields with data. In normal Livewire components, this is often done in the `mount()` method, as this is only run once, immediately after the component is instantiated.
For your fields to hold data, they should have a corresponding property on your Livewire component, just as in Livewire normally.
To fill a form with data, call the `fill()` method on your form, and pass an array of data to fill it with:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Post $post;
public $title;
public $content;
public function mount(): void // [tl! focus:start]
{
$this->form->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
} // [tl! focus:end]
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
];
}
public function render(): View
{
return view('edit-post');
}
}
```
### Default data
Fields can use a `default()` configuration method, which allows you to specify default values to fill your form with.
To fill a form with default values, call the `fill()` method on your form without any parameters:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class CreatePost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public $title = '';
public $content = '';
public function mount(): void // [tl! focus:start]
{
$this->form->fill();
} // [tl! focus:end]
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')
->default('Status Update') // [tl! focus]
->required(),
Forms\Components\MarkdownEditor::make('content'),
];
}
public function render(): View
{
return view('create-post');
}
}
```
> You may customize what happens after fields are hydrated [using the `afterStateHydrated()` method](advanced#hydration).
## Getting data from forms
To get all form data in an array, call the `getState()` method on your form.
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class CreatePost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public $title = '';
public $content = '';
public function mount(): void
{
$this->form->fill();
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
];
}
public function create(): void // [tl! focus:start]
{
Post::create($this->form->getState());
} // [tl! focus:end]
public function render(): View
{
return view('create-post');
}
}
```
When `getState()` is run:
1) [Validation](validation) rules are checked, and if errors are present, the form is not submitted.
2) Any pending file uploads are stored permanently in the filesystem.
3) [Field relationships](#field-relationships), if they are defined, are saved.
> You may transform the value that is dehydrated from a field [using the `dehydrateStateUsing()` method](advanced#dehydration).
## Registering a model
You may register a model to a form. The form builder is able to use this model to unlock DX features, such as:
- Automatically retrieving the database table name when using database validation rules like `exists` and `unique`.
- Automatically attaching relationships to the model when the form is saved, when using fields such as the `BelongsToManyMultiSelect`, `SpatieMediaLibraryFileUpload`, or `SpatieTagsInput`.
Pass a model instance to a form using the `getFormModel()` method:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Post $post;
public $title;
public $content;
public $tags;
public function mount(): void
{
$this->form->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
Forms\Components\SpatieTagsInput::make('tags'),
];
}
protected function getFormModel(): Post // [tl! focus:start]
{
return $this->post;
} // [tl! focus:end]
public function render(): View
{
return view('edit-post');
}
}
```
Alternatively, you may pass the model instance to the field that requires it directly, using the `model()` method:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Post $post;
public $title;
public $content;
public $tags;
public function mount(): void
{
$this->form->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
Forms\Components\SpatieTagsInput::make('tags')->model($this->post), // [tl! focus]
];
}
public function render(): View
{
return view('edit-post');
}
}
```
You may now use [field relationships](#field-relationships);
### Registering a model class
In some cases, the model instance is not available until the form has been submitted. For example, in a form that creates a post, the post model instance cannot be passed to the form before it has been submitted.
You may receive some of the same benefits of registering a model by registering its class instead:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class CreatePost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public $title = '';
public $content = '';
public $categories = [];
public function mount(): void
{
$this->form->fill();
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
Forms\Components\BelongsToManyMultiSelect::make('categories')->relationship('categories', 'name'),
];
}
protected function getFormModel(): string // [tl! focus:start]
{
return Post::class;
} // [tl! focus:end]
public function render(): View
{
return view('create-post');
}
}
```
You may now use [field relationships](#field-relationships).
## Field relationships
Some fields, such as the `BelongsToSelect`, `BelongsToManyMultiSelect`, `SpatieMediaLibraryFileUpload`, or `SpatieTagsInput` are able to interact with model relationships.
For example, `BelongsToManyMultiSelect` is a multi-select field that can be used to attach records to a `BelongstoMany` relationship. When [registering a model](#registering-a-model) to the form or component, these relationships will be automatically saved to the pivot table [when `getState()` is called](#getting-data-from-forms):
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Post $post;
public $title;
public $content;
public $categories;
public function mount(): void
{
$this->form->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
Forms\Components\BelongsToManyMultiSelect::make('categories')->relationship('categories', 'name'),
];
}
protected function getFormModel(): Post // [tl! focus:start]
{
return $this->post;
}
public function save(): void
{
$this->post->update(
$this->form->getState(),
);
} // [tl! focus:end]
public function render(): View
{
return view('edit-post');
}
}
```
### Saving field relationships manually
In some cases, the model instance is not available until the form has been submitted. For example, in a form that creates a post, the post model instance cannot be passed to the form before it has been submitted. In this case, you will [pass the model class](#registering-a-model-class) instead, but any field relationships will need to be saved manually after.
In this situation, you may call the `model()` and `saveRelationships()` methods on the form after the instance has been created:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Illuminate\Database\Eloquent\Model;
use Livewire\Component;
class CreatePost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public $title = '';
public $content = '';
public $tags = [];
public function mount(): void
{
$this->form->fill();
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
Forms\Components\SpatieTagsInput::make('tags'),
];
}
protected function getFormModel(): string
{
return Post::class;
}
public function create(): void
{
$post = Post::create($this->form->getState());
$this->form->model($post)->saveRelationships(); // [tl! focus]
}
public function render(): View
{
return view('create-post');
}
}
```
## Using multiple forms
By default, the `InteractsWithForms` trait only handles one form per Livewire component. To change this, you can override the `getForms()` method to return more than one form, each with a unique name:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Author;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Author $author;
public Post $post;
public $title;
public $content;
public $name;
public $email;
public function mount(): void
{
$this->postForm->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
$this->authorForm->fill([
'name' => $this->author->name,
'email' => $this->author->email,
]);
}
protected function getPostFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
];
}
protected function getAuthorFormSchema(): array
{
return [
Forms\Components\TextInput::make('name')->required(),
Forms\Components\TextInput::make('email')->email()->required(),
];
}
public function savePost(): void
{
$this->post->update(
$this->postForm->getState(),
);
}
public function saveAuthor(): void
{
$this->author->update(
$this->authorForm->getState(),
);
}
protected function getForms(): array // [tl! focus:start]
{
return [
'postForm' => $this->makeForm()
->schema($this->getPostFormSchema())
->model($this->post),
'authorForm' => $this->makeForm()
->schema($this->getAuthorFormSchema())
->model($this->author),
];
} // [tl! focus:end]
public function render(): View
{
return view('edit-post');
}
}
```
## Scoping form data to an array property
You may scope the entire form data to a single array property on your Livewire component. This will allow you to avoid having to define a new property for each field:
```php
<?php
namespace App\Http\Livewire;
use App\Models\Post;
use Filament\Forms;
use Illuminate\Contracts\View\View;
use Livewire\Component;
class EditPost extends Component implements Forms\Contracts\HasForms
{
use Forms\Concerns\InteractsWithForms;
public Post $post;
public $data; // [tl! focus]
public function mount(): void
{
$this->form->fill([
'title' => $this->post->title,
'content' => $this->post->content,
]);
}
protected function getFormSchema(): array
{
return [
Forms\Components\TextInput::make('title')->required(),
Forms\Components\MarkdownEditor::make('content'),
Forms\Components\SpatieTagsInput::make('tags'),
];
}
protected function getFormModel(): Post
{
return $this->post;
}
protected function getFormStatePath(): string // [tl! focus:start]
{
return 'data';
} // [tl! focus:end]
public function render(): View
{
return view('edit-post');
}
}
```
In this example, all data from your form will be stored in the `$data` array.
| 25.599106 | 361 | 0.633289 | eng_Latn | 0.809155 |
f48091394744a5ec328b4b4d845975cc064dbcb4 | 182 | md | Markdown | README.md | datauy/IonicMapsBaseApp | 8f9d6512363d97cfd91d1f235bdb9f9e9fbf5cef | [
"MIT"
] | null | null | null | README.md | datauy/IonicMapsBaseApp | 8f9d6512363d97cfd91d1f235bdb9f9e9fbf5cef | [
"MIT"
] | null | null | null | README.md | datauy/IonicMapsBaseApp | 8f9d6512363d97cfd91d1f235bdb9f9e9fbf5cef | [
"MIT"
] | null | null | null | # IonicMapsBaseApp
App base utilizando Ionic y OpenStreetMaps.
Se puede extender para hacer otras apps que requieran georeferenciar lugares, objectos, etc.
http://ionicframework.com
| 36.4 | 92 | 0.82967 | spa_Latn | 0.966892 |
f4809a1fdd819add6db4754b59913fc3e8f57c67 | 2,012 | md | Markdown | CHANGELOG.md | bgaboragco/harvesterjs | 219fd9163fe9877da27fc898f6d94f9d9958f62e | [
"MIT"
] | null | null | null | CHANGELOG.md | bgaboragco/harvesterjs | 219fd9163fe9877da27fc898f6d94f9d9958f62e | [
"MIT"
] | null | null | null | CHANGELOG.md | bgaboragco/harvesterjs | 219fd9163fe9877da27fc898f6d94f9d9958f62e | [
"MIT"
] | null | null | null | ##4.0.0
### Connection string
Following changes to the [MongoDB connection string specification](https://github.com/mongodb/specifications/commit/4631ccd4f825fb1a3aba204510023f9b4d193a05), connection string related configuration must now be URL-encoded.
For example, whereas before `mongodb://u$ername:pa$$w{}rd@/tmp/mongodb-27017.sock/test` would have been a valid connection string (with username `u$ername`, password `pa$$w{}rd`, host `/tmp/mongodb-27017.sock` and auth database `test`), the connection string for those details would now have to be provided to harvesterJS as `mongodb://u%24ername:pa%24%24w%7B%7Drd@%2Ftmp%2Fmongodb-27017.sock/test`.
The easiest fix is to use the `mongodb-uri` module:
```javascript
const mongodbUri = require('mongodb-uri');
const harvester = require('harvesterjs');
// works with already encoded connection strings as well
const encodeMongoURI = (connectionString) => {
if (connectionString) {
return mongodbUri.format(mongodbUri.parse(connectionString));
}
return connectionString;
}
const options = {
adapter: 'mongodb',
connectionString: encodeMongoURI(connectionString),
oplogConnectionString: encodeMongoURI(oplogConnectionString)
}
const harvesterApp = harvester(options);
```
### Options flags
The Mongoose options structure changed in the new version [Option Changes in Mongoose v5.x](https://mongoosejs.com/docs/connections.html#v5-changes).
If the flags field is provided in the HarvesterJS options it has value has to follow the new Mongoose v5.x structure.
##### Old Mongoose v4.x format
```javascript
const options = {
adapter: 'mongodb',
flags: {
server: {
socketOptions: {
socketTimeoutMS: 360000,
},
reconnectTries: 30,
},
};
const harvesterApp = harvester(options);
```
##### New Mongoose v5.x format
```javascript
const options = {
adapter: 'mongodb',
flags: {
socketTimeoutMS: 360000,
reconnectTries: 30,
},
};
const harvesterApp = harvester(options);
```
##3.3.0
| 31.4375 | 399 | 0.733101 | eng_Latn | 0.870699 |
f480e65a21ead7654dfdaa67259d98ee198d394c | 26,590 | md | Markdown | CHANGELOG.md | gilest/ember-basic-dropdown | 44e8249b7c9b5707fded409d2c82d0624aeb9b79 | [
"MIT"
] | null | null | null | CHANGELOG.md | gilest/ember-basic-dropdown | 44e8249b7c9b5707fded409d2c82d0624aeb9b79 | [
"MIT"
] | null | null | null | CHANGELOG.md | gilest/ember-basic-dropdown | 44e8249b7c9b5707fded409d2c82d0624aeb9b79 | [
"MIT"
] | 1 | 2021-04-16T05:58:47.000Z | 2021-04-16T05:58:47.000Z | # 3.0.16
- Update ember and many other dependencies
# 3.0.15
- Fix conditional import for Embroider compatibility, using @embroider/macros.
# 3.0.14
- Import `htmlSafe` from `@ember/template` to fix deprecation warning.
- Migrate to github actions and fix stuff to make CI green again.
# 3.0.13
- Use `ember-style-modifier` in one more place.
- Migrate to github actions and fix CI on beta and canary. This was done by relaxing a dependency
on the embroider utils.
# 3.0.12
- Use `ember-style-modifier` for setting styles on element instead of using inline styles. This allows to
use the addon on webs that forbid inline styles on they CSP config.
# 3.0.11
- Relax dependency on ember-truth-helpers
# 3.0.10
- More IE11 fixes.
# 3.0.9
- Use `assign` polyfill for IE11 support.
# 3.0.8
- Update `ember-maybe-in-element` to 2.0.1, which fixes deprecation about `{{-in-element}}` usage.
# 3.0.7
- Add back `onXXX` event handlers to the trigger and content components, removed in the transition to angle bracket components,
because make some patterns easier as it allows to set handlers to those events when defining the contextual component, instead of
in invocation time
# 3.0.6
- [BUGFIX] Restore ability to change `horizontalPosition` and `verticalPosition` after initialization, lost
at some point in the transition to glimmer components.
# 3.0.5
- TS: Fix type definition: Allow touch events in `handleRootMouseDown` on another location.
# 3.0.4
- TS: Fix type definition: Allow touch events in `handleRootMouseDown`.
# 3.0.3
- A11y fix: When the component is not expanded, it must have `aria-expanded="false"`.
# 3.0.2
- Ensure `this._super` in invoked in the `included` hooks.
# 3.0.0-beta.8
- [BUGFIX] Ensure the `otherStyles` is initialized with a copied object.
# 3.0.0-beta.7
- [CHORE] Update to `@glimmer/component` 1.0.0
- [BUGFIX] If there's no enough space for the dropdown above nor below the trigger, put it below.
# 3.0.0-beta.6
- [CHORE] Improve exported types
# 3.0.0-beta.5
- [BUGFIX] Don't use typescript in /app folder
# 3.0.0-beta.4
- [CHORE] Convert to typescript
# 3.0.0-beta.3
- [BUGFIX] Fix reposition of dropdown after scroll or window resize events
# 3.0.0-beta.2
- [CHORE] Update to @glimmer/component 1.0.0-beta.2
# 3.0.0-beta.1
- No changes since alpha 1
# 3.0.0-alpha.1
- [MAYBE-BREAKING] Update to glimmer components
# 2.0.10
- [BUGFIX] Use `set` when changing `previousVerticalPosition` and `previousHorizontalPosition`.
# 2.0.9
- [CHORE] Update some dependencies. More importantly `ember-element-modifiers` to 1.0.2 which changes its behavior.
# 2.0.8
- [BUGFIX] Correct condition in which the development assertion added in 2.0.6 is thrown. Logic was reversed.
# 2.0.7
- [BUGFIX] Update `ember-element-helper` to 0.2.0 to fix bug in engines
# 2.0.6
- [ENHANCEMENT] Add development assertion to help people understand the somewhat cryptic error message that
appeared when there was no element with id `ember-basic-dropdown-wormhole` in the document
# 2.0.5
- [CHORE] Update npm packages to tests pass in beta and canary
- [BUGFIX] Ensure Ember doesn't complain about not using `set` to update values.
# 2.0.4
- [ENHANCEMENT] Allow to pass a `@defaultClass` argument to the content. This is necessary to be able to
assign classes while the `{{component}}` helper does not allow to pass attributes like in angle-bracket syntax.
# 2.0.3
- [ENHANCEMENT] Allow to pass a `@defaultClass` argument to the trigger. This is necessary to be able to
assign classes while the `{{component}}` helper does not allow to pass attributes like in angle-bracket syntax.
# 2.0.2
- [BUGFIX] Move `ember-truth-helpers` to `dependencies`.
# 2.0.1
- [ENHANCEMENT] Expose `ember-basic-dropdown/utils/calculate-position` as public API.
# 2.0.0
- [CHORE] Setup/teardown mutation observer using dedicated element modifiers.
- [CHORE] Refactor animation logic to use `{{did-insert}}`/`{{will-destroy}}` element modifiers.
# 2.0.0-beta.3
- [BREAKING] Remove `onMouseDown`,`onClick`,`onKeyDown` and `onTouchEnd` attributes from the `<dropdown.Trigger>` component.
Users can just use the `{{on}}` modifier to attach events now, although that means that to prevent the default event handler from being called for those events,
instead of `return false`, they must call `event.stopImmediatePropagation()` like they would with regular events.
# 2.0.0-beta.2
- [BREAKING] Remove `onMouseEnter`,`onMouseLeave`,`onFocus`,`onBlur`,`onFocusIn`,`onFocusOut` and `onKeyUp` from the `<dropdown.Trigger>` component.
Users can just use the `{{on}}` modifier to attach any event they want.
- [BREAKING] Remove `onFocusIn`, `onFocusOut`, `onMouseEnter`, `onMouseLeave` and `onKeyDown` from the `<dropdown.Content>` component. Users
can just use the `{{on}}` modifier to attach any event they want.
# 2.0.0-beta.1
- [CHORE] Now that Ember 3.11 is released, this can go to beta.
# 2.0.0-alpha.X
- Because of some limitation of splattributes in AngleBracket components, the addons requires Ember 3.11+
- Public API changed: Previously the contextual component of this addon were invoked with `{{#dd.trigger}}` and `{{#dd.content}}`.
Now they are expected to be invoked with `<dd.Trigger>` and `<dd.Content>`. Note that the names are capitalized. This is done
because the new convention that components start with a capital letter.
- Passing `@class`, `@defaultClass`, `aria-*` and most of those properties doesn't work anymore. This addon
now expects to be used with angle-bracket syntax. In angle bracket syntax there is a distinction between
component arguments (those preceded with an `@` sign) and html attributes, and the latter are a much
better way of passing any arbitrary attribute to any of the components and sub-components of this addon.
- The default `eventType` of the trigger changed from `"mousedown"` to `"click"`. That means that dropdown
used to open with the `mousedown` events will now open with `click` events, which is a saner default.
Since most of the time those events are fired together most people won't notice any difference, but
in testing if someone was explicitly firing mouseodown events tests would have to be updated.
- The default `rootEventType` has changed from `"mousedown"` to `"click"`. That means that before dropdowns would close
as soon as the user mousedowns outside the dropdown, but not it will listed to click events by default.
It's unlikely this change will be noticed by anyone.
# 1.1.2
- [ENHANCEMENT] Allow to bind the type attribute of the trigger.
# 1.1.1
- [ENHANCEMENT] Allow to customize the root event the component listens to in order to close when you
click outside it. It has historically been `mousedown`, but now it can be `click`.
This allows to scroll the page using a scrollbar.
In a future version `click` will become the default.
# 1.1.0
- [REAPPLY] Revert the revert in 1.0.6. Technically identical to 1.0.5
# 1.0.6
- [REVERT] Revert change in 1.0.5 as it is a breaking change for Ember Power Select. Will fix EPS and apply again.
# 1.0.5
- [BUGFIX] A11y improvement: The trigger doesn't have an `aria-owns` attribute until the dropdown
is open, because it's illegal for an aria-owns to reference an element (the content) that it's not
in the page (yet).
# 1.0.4
- [BUGFIX] Fix code to find the destination element in fastboot.
# 1.0.3
- [FEATURE] Allow `onKeyUp` action to added to the trigger.
# 1.0.2
- [FEATURE] Allow `onKeyDown` action to the content component
# 1.0.1
- [DOCS] Document `eventType` option that has existed for a while now.
- [FEATURE] Add `stopPropagation` option to the trigger to prevent the propagation of the event
# 1.0.0
- NO NEW CHANGES, just version 1.0. It was about time.
# 1.0.0-beta.8
- [CLEANUP] Remove another reduntad `self.` prefix.
# 1.0.0-beta.7
- [CLEANUP] Remove unnecessary `self.` prefix to access a few globals like `document` or `window`.
# 1.0.0-beta.6
- [CLEANUP] Remove passing `to="id-of-destination"` that has been deprecated for a long time.
# 1.0.0-beta.5
- [BUGFIX] Fix event being fired on destroyed trigger in Ember 3.2 and beyond.
# 1.0.0-beta.4
- [FEATURE] Allow dropdowns with a custom `calculatePosition` function to return in the `styles` object
css properties other than `top`, `left`, `right`, `height` and `width`. Now users can set any arbitrary
properties. P.e. `max-height`, `z-index`, `transform`....
# 1.0.0-beta.3
- [INTERNAL] Stop depending internally on `ember-native-dom-helpers`. Now the utilities in `ember-test-helpers`
have been ported to `@ember/test-helpers`, so they are not needed anymore.
# 1.0.0-beta.0
- [DEPRECATION] Deprecate global acceptance helpers `clickDropdown` and `tapDropdown`. Suggest to
explicitly import `clickTrigger`/`tapTrigger` or even better, just use `click`/`tap` from `@ember/test-helpers`.
- [BREAKING] Drop ember-wormhole addon, use `#-in-element` built-in instead. Less size, more performance.
- [BREAKING] Drop support for Ember <= 2.9. This addon will require Ember 2.10 or greater to work.
# 0.34.0
- [BREAKING] Delete the `/test-support` folder, use `addon-test-support` instead. That means people
should import helpers from `ember-basic-dropdown/test-support/helpers` instead of using relative paths
like `../../helpers/ember-basic-dropdown`, as they are brittle and change with nesting.
- [BUGFIX] Ensure that the dropdown is not open by the right button of the mouse.
# 0.33.10
- [ENHANCEMENT] Allow `horizontalPosition` to work when `renderInPlace=true`
# 0.33.9
- [ENHANCEMENT] Allow to bind the role attribute of the trigger
- [ENHANCEMENT] Added `preventScroll=true` option to the `dropdown.content` component to "frezze" all
mousewheel-triggered scrolling happening outside the component. Scrolling on touch devices using
touchmove can still occur.
# 0.33.7
- [ENHANCEMENT] Enable `renderInPlace=true` dropdowns to be dynamically repositioned. (#350)
- [BUGFIX] Ensure the inline `style` attribute does not output `undefinedpx` when some style is
undefined. Also, ensure that both `left` and `right` cannot be applied simultaneous, as it doesn't
make sense.
# 0.33.6
- [BUGFIX] Prevent dropdowns with `renderInPlace=true` from being incorrectly opened twice. This had
no evident effects, but lead to the events in the `dd.Content` component to be added twice.
- [BREAKING] Remove support for IE9/IE10.
# 0.33.5
- [ENHANCEMENT] Allow `transitioningInClass` and `transitioningOutClass` to be several classes by
passing a string with spaces in it.
# 0.33.4
- [BUGFIX] Allow to use `horizontalPosition="center"` along with `renderInPlace=true`.
# 0.33.3
- [BUGFIX] When the component is rendered in-place, it still has to have the `--below` class
needed. While not used for positioning, it was used for animations.
# 0.33.2
- [BUGFIX] Fix positioning problem when the body has position relative.
# 0.33.1
- [BUGFIX] Move test helpers to `/addon-test-support` and use `/test-support` only for reexporting
so apps don't require babel 6.6.0+ to work.
# 0.33.0
- [INTERNAL] Stop relying in `Ember.testing` to decide the wormhole destination. Not just uses
the environment.
- [ENHANCEMENT] This component by default opens with `mousedown` events, not with `click`. This
behavior has its reasons but it's also surprising for some users and it might not be adequate
sometimes. Now the trigger accepts an `eventType="mousedown" | "click"` (defaults to `"mousedown"` as today). This doesn't affect touch devices, as those are opened/closed
with the `touchend` event.
- [INTERNAL] Use the new import paths (e.g: `import Component from '@ember/component`)
# 0.32.9
- [ENHANCEMENT] Add `auto-right` option for horizontal position. It's like `auto`, but it defaults to
being anchored to the right unless there is not enough room towards the left for the content.
# 0.32.8
- [ENHANCEMENT] Update `ember-native-dom-helpers` to `^0.5.0`.
# 0.32.7
- [BUGFIX] Guard agains edge case where positioning could fail if the select close and opened extremely fast
# 0.32.6
- [BUGFIX] Add global events to the `document` instead of the `body`, so the dropdown works
even if the body is smaller than the height of the window
# 0.32.5
- [ENHANCEMENT] Update `ember-native-dom-helpers` to 0.4.0
# 0.32.4
- [BUGFIX] Fix broken render in place after refactor for allowing dropdowns in scrollable
elements.
# 0.32.3
- [ENHANCEMENTE] Allow to nest dropdowns infinitely. Thanks to @alexander-alvarez
# 0.32.2
- [BUGFIX] Stop looking for scrollable ancestors on the BODY or HTML
# 0.32.1
- [BUGFIX] Allow to work if the container has no offsetParent. This happens if the body
is relative.
# 0.32.0
- No changes in previous beta
# 0.32.0-beta.0
- [ENHANCEMENT] Dropdowns inside elements with their own scroll are finally supported! It works
even inside elements with scroll inside another element with scroll inside pages with scroll.
- [BREAKING/BUGFIX] Closes #233. The positioning logic now accounts for the position of the
parent of container if it has position relative or absolute. The breaking part is that
custom `calculatePosition` functions now take the destination element as third arguments,
and the object with the options has now been moved to thr forth position.
- [BREAKING] Passing `to="id-of-elmnt"` to the `{{#dropdown.content}}` component is deprecated.
The way to specify a different destination for `ember-wormhole` is now by passing `destination=id-of-elmnt`
to the top-level `{{#basic-dropdown}}` component. The old method works but shows a deprecation message.
# 0.31.5
- [INTERNAL] Update ember-concurrency to a version that uses babel 6.
# 0.31.4
- [BUGFIX] Fix calculation of the screen's width when browser shows a scrollbar, which
affected dropdowns with `horizontalPosition="right"`.
- [BUGFIX] Fix initial CSS positioning flickering caused by refactor that removed jQuery.
# 0.31.2
- Now the addon is 100% jQuery-free. Docs page doesn't use jQuery either. Tests in CI run
without jquery so if anyone inadvertenly relies on it, tests will fail.
- Rewrite tests to use `async/await` with the latest `ember-native-dom-helpers`
# 0.31.1
- Update to ember-native-dom-helpers 0.3.4, which contains a new & simpler import path.
# 0.31.0
- [INTERNAL/BREAKING???] Update to Babel 6. This **shouldn't** be breaking, but you
never know.
# 0.30.3
- [BUFGIX] Fix unnecesary line break caused by the wormhole empty div. Solved
by making that div be `display: inline`.
# 0.30.1
- [ENHANCEMENT] Allow the `calculatePosition` function to also determine the height of the
dropdown's content.
# 0.30.0
- [BREAKING] Unify `calculatePosition` and `calculateInPlacePosition`. Now the function receives
an `renderInPlace` flag among its options, and based on that it uses a different logic.
This reduces the public API surface. This new function is the default export of `/utils/calculate-position`.
The individual functions used to reposition when rendered in the wormhole or in-place are
available as named exports: `calculateWormholedPosition` and `calculateInPlacePosition`
# 0.24.5
- [INTERNAL] Use `ember-native-dom-helpers`.
# 0.24.4
- [ENHANCEMENT] The trigger component now has a bound style property.
# 0.24.3
- [ENHANCEMENT] Rely on the new `ember-native-dom-helpers` to fire native events instead
or rolling out my own solution.
# 0.24.2
- [BUGFIX] Fix synthetic click on the trigger when happens in SVG items.
# 0.24.1
- [ENHANCEMENT] Bind `aria-autocomplete` and `aria-activedescendant`.
- [BUGFIX] Check if the component is destroyed after calling the `onClose` action, as it
might have been removed.
# 0.24.0
- [BREAKING] It is a problem for a11y to have `aria-owns/controls` to an element that it's not
in the DOM, so now there is a stable div with the right ID that gets moved to the root
of the body when the component opens.
- [BUGFIX] Fix `clickDropdown` test helper when the given selector is already the selector
of the trigger.
# 0.23.0
- [BREAKING] Don't display `aria-pressed` when the component is opened. The attribute is not
present by default, but can be bound from the outside.
- [BREAKING] Don't display `aria-haspopup` by default, but display it if passed in a truthy value.
- [BREAKING] Use `aria-owns` instead of `aria-controls` to link trigger and content together.
# 0.22.3
- [FEATURE] The `dropdown.content` now accepts a `defaultClass` property as a secondary way
of adding a class in contextual components that doesn't pollute the `class` property.
# 0.22.2
- [FEATURE] `clickTrigger` test helper also works when the given selector is the one of
the trigger (before it had to be an ancestor of the trigger).
# 0.22.1
- [FEATURE] It accepts an `onInit` action passed from the outside. Private-ish for now.
# 0.22.0
- [FEATURE/BREAKING] Allow to customize the ID of the trigger component. Now the dropdown
uses a new `data-ebd-id` attribute for query the trigger reliably. Unlikely to be
breaking tho.
# 0.21.0
- [FEATURE] Add LESS support, on pair with the SASS one.
- [FEATURE] Added `$ember-basic-dropdown-overlay-pointer-events` SASS variable.
# 0.20.0
- [BREAKING CHANGE] Renamed `onKeydown` event to `onKeyDown` to be consistent with the naming
of every other action in the component
# 0.19.4
- [FEATURE] Allow to pass `onMouseDown` and `onTouchEnd` options actions to subscribe to those
events. If the handler for those events returns `false`, the default behaviour (toggle the component)
is prevented.
# 0.19.3
- [FEATURE] Allow to pass `onMouseEnter` and `onMouseLeave` actions to the content, like
we allow with the trigger.
# 0.19.2
- [BUGFIX] Prevent the `touchend` that opens the trigger to trigger a click on the dropdown's content
when this appears over the trigger. Copied from hammertime.
# 0.19.1
- [CLEANUP] Update to `ember-cli-sass` ^6.0.0 and remove `node-sass` from dependencies.
# 0.19.0
- [BUGFIX] Call `registerAPI` will `null` on `willDestroy` to avoid memory leaks.
# 0.18.1
- [ENHANCEMENT] Pass the dropdown itself as an option to `calculatePosition` and `calculateInPlacePosition`,
so users have pretty much total freedom on that function.
# 0.18.0
- [ENHANCEMENT] Allow downdowns rendered in place to be positioned above the trigger,
and also to customize how they are positioned.
# 0.17.4
- [ENHANCEMENT] Update to ember-wormhole 0.5.1, which maximises Glimmer2 compatibility
# 0.17.3
- [BUGFIX] The fix in 0.17.2 that removed `e.preventDefault()` cause both `touchend` and a synthetic `mousedown`
events to be fired, which basically made the component to be opened and immediatly closed in touch devises.
# 0.17.2
- [BUGFIX] Remove `e.preventDefault()` that caused inputs inside the trigger to not be focusable in touch screens
# 0.17.1
- [BUGFIX] The positioning strategy takes into account the horizontal scroll and it's generally smarter.
- [BUGFIX] Fixed bug when a dropdown with `horizontalPosition="auto"` passed from left to right, it keeped
both properties, modifiying its width implicitly.
# 0.17.0
- [BREAKING] The object returned by `calculatePosition` now contains the offsets
of the dropdown as numbers instead of strings with "px" as unit. This makes easier for people
to modify those values in their own functions.
# 0.16.4
- [ENHANCEMENT] The default `calculatePosition` method is now inside `addon/utils/calculate-position`, so users can import it
to perhaps reuse some of the logic in their own positioning functions.
# 0.16.3
- [BUGFIX] Add forgotten `uniqueId` property to the publicAPI yielded to the block. The `publicAPI` object passes to actions had it
but the one in the block didn't.
# 0.16.2
- [ENHANCEMENT] Allows to customize how the dropdown is positioned by passing a `calculatePosition` function.
# 0.16.1
- [BUGFIX] Remove automatic transition detection. It never worked properly. It's fundamentally flawed. CSS animations are OK tho.
# 0.16.0
- [TESTING] Ensure the addon is tested in 2.4LTS
- [BUGFIX] Fix bug in versions of ember <= 2.6
# 0.16.0-beta.6
- [BUGFIX] Remove `ember-get-config` entirely. It turns that there is a less hacky way of doing this.
# 0.16.0-beta.5
- [BUGFIX] Update `ember-get-config` to fix Fastboot compatibility.
# 0.16.0-beta.4
- [BUGFIX] Guard agains a possible action invocation after the component has been destroyed.
# 0.16.0-beta.3
- [BUGFIX] Fix broken `horizontalPosition="center"`.
# 0.16.0-beta.2
- [BUGFIX] Bind `title` attribute in the trigger.
# 0.16.0-beta.1
- [BUGFIX] Revert glimmer2 compatibility
# 0.16.0-beta.0
- [BUGFIX] Compatibility with glimmer2.
# 0.15.0-beta.6
- [BUGFIX] Fix bug detaching event in IE10.
# 0.15.0-beta.5
- [BUGFIX] The correct behaviour when a dropdown is disabled or the tabindex is `false` should be
to not have `tabindex` attribute, not to have a `tabindex` of -1.
# 0.15.0-beta.4
- [BUGFIX] Don't import `guidFor` from the shims.
# 0.15.0-beta.3
- [BUGFIX] Preventing the default behaviour from an event doesn't prevent the component from doing
the usual thing.
# 0.15.0-beta.2
- [BUGFIX] Fix edge case that made the component leak memory if the component is removed after a
mousedown but before the mouseup events of the trigger. This situation probably almost impossible
outside testing.
# 0.15.0-beta.1
- [ENHANCEMENT] The dropdown can have an overlay element if it receives `overlay=true`.
# 0.15.0-beta.0
- [BREAKING] `dropdown.uniqueId` is not a string like `ember1234` instead of the number `1234`.
# 0.14.0-beta.1
- [BUGFIX] Consider the scope of the select the entire body, even if the app is rendered inside an
specific element.
# 0.14.0-beta.0
- [BREAKING] Rename the `dropdown._id` to `dropdown.uniqueId` and promote it to public API.
# 0.13.0-beta.6
- [BUGFIX] Make the first reposition faster by applying the styles directly instead of using bindings.
This allows the dropdown to have components with autofocus inside without messing with the scroll.
# 0.13.0-beta.5
- [BUGFIX] Enabling the component after it has been disabled should trigger the `registerAPI` action.
# 0.13.0-beta.4
- [BUGFIX] Fix bug when the consumer app has a version of `ember-cli-shims` older than 0.1.3
# 0.13.0-beta.3
- [BUGFIX] Stop importing `getOwner` from the shim, since many people doesn't have shims up to date
and it's trolling them.
# 0.13.0-beta.2
- [BUGFIX] Render more than one component with `renderInPlace=true` cases an exception and after that
mayhem happens.
# 0.13.0-beta.1
- [BUGFIX] Use `requestAnimationFrame` to wait one frame before checking if the component is being
animated. This makes this component fully compatible with Glimmer 2.
# 0.13.0-beta.0
- [ENHANCEMENT/BREAKING] Now the publicAPI object received sub-components and passed to actions is
immutable. That means that each change in the internal state of this component will generate a new
object. This allows userts to get rid of `Ember.Observe` to use some advanced patterns and makes
possible some advanced time-travel debugging.
# 0.12.0-beta.26
- [ENHANCEMENT] Allow to customize the classes used for animations with `transitioningInClass`,
`transitionedInClass` and `transitioningOutClass`.
- [BUGFIX] Property detect space on the right when `horizontalPosition="auto"` (the default) and
position the element anchored to the right of the dropdown if there is no enough space for it
to fit.
# 0.12.0-beta.23
- [BUGFIX] Correctly remove touchmove event on touch ends.
- [BUGFIX] Prevent DOM access in fastboot mode.
# 0.12.0-beta.22
- [ENHANCEMENT] If the component gets disabled while it's opened, it is closed automatically.
# 0.12.0-beta.21
- [ENHANCEMENT] Expose `clickDropdown` and `tapDropdown` acceptance helpers.
- [BUGFIX] Allow to nest a dropdown inside another dropdown without the second being rendered in place.
# 0.12.0-beta.20
- [BUGFIX] Apply enter animation after render. Otherwise it take place before the component
gains the `--above` or `--below` class, which is needed to know how to animate.
# 0.12.0-beta.19
- [BUGFIX] Allow `to` property of the content component to be undefined
# 0.12.0-beta.18
- [BUGFIX] Fix positioning of dropdowns rendered in-place due to a typo
# 0.12.0-beta.17
- [BUGFIX] Ensure the `disabled` property of the public API is updated properly
# 0.12.0-beta.13
- [BUGFIX] Ensure reposition is not applied in destroyed components
# 0.12.0-beta.12
- [BUGFIX] Fix animations
# 0.12.0-beta.11
- [INTERNAL] Update ember-cli to 2.6
- [INTERNAL] Update ember-wormhole to 0.4.0 (fastboot support)
- [BUGFIX] Correct behaviour of `aria-disabled`, `aria-expanded`, `aria-invalid`, `aria-pressed`
and `aria-required` in Ember 2.7+
- [ENHANCEMENT] Allow to customize componets for the trigger and the content, with the `triggerComponent`
and `contentComponent` properties.
- [BUGFIX] Stop relying in `this.elementId`, since it is not present in Ember 2.7+ on tagless components.
- [ENHANCEMENT] Add an `onBlur` action to the trigger
- [BUGFIX] Change repositioning logic so it doesn't sets properties from inside the `didInsertElement`,
which is deprecated and causes a performance penalty.
# 0.12-beta.6
- [ENHANCEMENT] Although the component is tagless by default, if the user passes `renderInPlace=true`,
a wrapper element with class `.ember-basic-dropdown` is added to be able to position the content
properly.
- [BUGFIX] The reposition function no longer sets any observable state, so it can be called at
any time without worring about the runloop and double renders. More performant and fixes a bug
when something inside gains the focus faster than the reposition.
# 0.12-beta.5
- [BUGFIX] Don't focus the trigger again when the dropdown is closed as consecuence of clicking outside it.
- [ENHANCEMENT] Allow to add focusin/out events to the trigger
# 0.12-beta.4
- [BUGFIX] Ensure that if the trigger receives `tabindex=null` it still defaults to 0.
# 0.12-beta.3
- [BUGFIX] Ensure that the `aria-controls` attribute of the trigger points to the content by default.
# 0.12-beta.2
- [BUGFIX] Focus the trigger when the component is closed
- [BUGFIX] Allow to attach `onFocusIn` and `onFocusOut` event the the dropdown content component.
# 0.12-beta.1
- [BUGFIX] Around half a docen regressions and changes, including add the proper classes to trigger
and content when the component is rendered above/below/right/left/center/in-place. Now those cases
are different between trigger and content for better granularity.
# 0.12-beta.0
- [BREAKING CHANGE] Brand new API
| 42.27345 | 173 | 0.74607 | eng_Latn | 0.993714 |
f4815da98dec3fe36d18457eb6d08c463b875846 | 5,322 | markdown | Markdown | _posts/blog/2017-12-11-transy-language-spec.markdown | ifeolb/ifeolb3.github.io | d123db8fb404f8821f3fea198ba3460587a664c5 | [
"MIT"
] | 2 | 2018-04-17T05:27:15.000Z | 2019-02-11T22:25:25.000Z | _posts/blog/2017-12-11-transy-language-spec.markdown | ifeolb/ifeolb3.github.io | d123db8fb404f8821f3fea198ba3460587a664c5 | [
"MIT"
] | null | null | null | _posts/blog/2017-12-11-transy-language-spec.markdown | ifeolb/ifeolb3.github.io | d123db8fb404f8821f3fea198ba3460587a664c5 | [
"MIT"
] | 10 | 2018-07-05T19:56:14.000Z | 2020-07-05T22:28:33.000Z | ---
layout: post
title: Transy Language Specification
author: Brendan Thompson
date: 2017-12-11 7:00:00 -0400
permalink: /transy-language-spec
categories: blog
tags: C++
excerpt: The Transy Language defined for the Compiler Construction course during the fall of 2017 at Transylvania University
github: https://github.com/brenthompson2/Compiler-Constructin
image: /assets/img/transy-logo-bat.png
imageAlt: Transylvania Raf Logo
image-slider: /assets/img/project-images/slider-images/transy-logo-bat-slider.png
---
### The Transy Language
The language itself was only ever verbally described to the students in class. There were certain specifics that everybody needed to meet, but otherwise the implementations were all individual. It is a loosely typed, procedural language that it Touring Complete. The syntax for the language can most easily be compared to BASIC. It is fairly easy to learn and use, but be careful with those `goto` commands.
The type system:
- ID = variable or constant
- Literal = "string of words" or $literalVariable
- Array = An aggregate data type for IDs
See the project page for my [Transy Compiler & Executor](/transy-compiler-executor) for more information.
### The Commands
#### read
Takes in up to 7 variables to be read in to Core Memory
**example:**
read firstVar, secondVar, thirdVar
#### write
Takes in up to 7 variables from Core Memory to be written out to the console
**example:**
write firstVar, secondVar, thirdVar
#### stop
Tells the Executor to finish executing
**example:**
stop
#### dim
Allocates the space for up to 7 arrays at a time in Core Memory
**example:**
dim firstArray[10], secondArray[15], thirdArray[20]
#### aread
Reads in values to the array locations in Core Memory between the specified startIndex and endIndex
**example:**
aread firstArray, 3, 8
#### awrite
Writes the values to the screen from Core Memory in the array locations between the specified startIndex and endIndex
**example:**
awrite firstArray, 3, 8
#### goto
Starts executing at the line specified by the line label
**example:**
firstLineLabel:
goto firstLineLabel
#### loop
Loops while the runnerVariable has not yet crossed the endValue. The runnerVariable must be a variable, while the startIndex, endIndex, and incrementAmount can be variables or constants. It is a pre-test loop
The conditional is based off the incrementAmount.
**Conditions**
- Positive: `<=`
- Negative: `>=`
- Zero: `!=`
**example:**
loop runnerVariable, startIndex, endIndex, IncrementAmount
a = a + runnerVariable
loop-end
#### loop-end
Manipulates the runnderVariable by incrementAmount, based off the last loop encountered by the executor. Then it checks the conditional of that loop. If it succeeds, the program begins executing at the first line in the loop. If it fails, the program continues executing at the next line after loop-end
**example:**
loop runnerVariable, startIndex, endIndex, IncrementAmount
a = a + runnerVariable
loop-end
#### ifa
Jumps to one of the line labels specified based on whether the supplied value is negative, zero, or positive
**example:**
ifa (varToTest) negativeLabel, zeroLabel, positiveLabel
negativeLabel:
lwrite "The value was negative\n"
goto finishedLabel
zeroLabel:
lwrite "The value was zero\n"
goto finishedLabel
positiveLabel:
lwrite "The value was positive\n"
finishedLabel: stop
#### nop
Does not do anything
**example:**
nop
#### listo
Prints out the object code for the current program
**example:**
listo
#### lread
Reads in a $literalVariable to the Literal Table
**example:**
lread $literalVariable
#### lwrite
Writes the literal to the console. Can take in a $literalVariable or a "literal string". `\n` prints a new line
**example:**
lwrite "Please supply a literal"
lread $literalVariable
lwrite $literalVariable
#### if
Starts executing at the line location specified if the conditional evaluates to true. Otherwise, it continues executing at the next line.
**Operators:**
- `<`
- `<=`
- `=`
- `>`
- `>=`
- `!`
**example:**
if (firstVar < secondVar) then firstLineLabel
a = firstVar
goto finishLabel
firstLineLabel: a = secondVar
finishLabel: stop
#### cls
Clears the screen
**example:**
cls
#### cdump
Prints all of the values in Core Memory between the startIndex and the endIndex
**example:**
cdump 0, 10
#### subp
Executes the mathematical command represented by the operation and given the ID. It then sets the value to the supplied variable
**Operations:**
- `sin` = sets the variable to the sin of the ID
- `cos` = sets the variable to the cosin of the ID
- `exp` = sets the variable to the base-e exponential function of the ID
- `abs` = sets the variable to absolute value of the ID
- `alg` = sets the variable to the log base 2 of the ID
- `aln` = sets the variable to the natural log of the ID
- `log` = sets the variable to the log base 10 of the ID
- `sqr` = sets the variable to the square root of the ID
**example:**
subp sin(destinationVariable, sourceValue)
#### Assignment
All basic mathematical expressions can be handled.
**Operators:**
- `+`
- `-`
- `*`
- `/`
- `^`
- `()` = enforced order of operations
- `[]` = direct access into an array index
**example:**
a[2^5] = 0 + 2 / firstArray[index] * (-10 - b ^ 2)
| 22.646809 | 407 | 0.731116 | eng_Latn | 0.995394 |
f481dae6269c99fe2cb5c48b093495212e16e9b9 | 510 | md | Markdown | README.md | akoster15/budget-tracker | 6fab3b6e581a6f5a45d11f43797c0cbf2d8d718d | [
"MIT"
] | null | null | null | README.md | akoster15/budget-tracker | 6fab3b6e581a6f5a45d11f43797c0cbf2d8d718d | [
"MIT"
] | null | null | null | README.md | akoster15/budget-tracker | 6fab3b6e581a6f5a45d11f43797c0cbf2d8d718d | [
"MIT"
] | null | null | null | # Budget-Tracker
## Description
This app allows users to track their expenses both online and offline by adding or subtracting fund from their account balance.
![screenshot](https://user-images.githubusercontent.com/85514792/146469153-1e797ac9-54a3-48e4-81e8-b96521a77b22.PNG)
## Technologies Used
- Node
- Express
- Mongoose
- MongoDB
- Heroku
- IndexDB
- PWAs
- Service Workers
## Links
- GitHub: https://github.com/akoster15/budget-tracker
- Deployed: https://budget-tracker-akoster15.herokuapp.com/
| 21.25 | 127 | 0.772549 | eng_Latn | 0.555538 |
f4821fb3d3b4baad3025f6982273d6ddb02154c2 | 17,408 | md | Markdown | website/documentation/getting-started.md | stefanistrate/PhotoSwipe | 324ccc8d68d186c29fd5fbd84cc09b0fe020174a | [
"MIT"
] | null | null | null | website/documentation/getting-started.md | stefanistrate/PhotoSwipe | 324ccc8d68d186c29fd5fbd84cc09b0fe020174a | [
"MIT"
] | null | null | null | website/documentation/getting-started.md | stefanistrate/PhotoSwipe | 324ccc8d68d186c29fd5fbd84cc09b0fe020174a | [
"MIT"
] | null | null | null | ---
layout: default
title: "PhotoSwipe Documentation: Getting Started"
h1_title: Getting Started
description: PhotoSwipe image gallery getting started guide.
addjs: true
canonical_url: http://photoswipe.com/documentation/getting-started.html
buildtool: true
markdownpage: true
---
First things that you should know before you start:
- PhotoSwipe is not a simple jQuery plugin, at least basic JavaScript knowledge is required to install.
- PhotoSwipe requires predefined image dimensions ([more about this](faq.html#image-size)).
- If you use PhotoSwipe on non-responsive website – controls will be scaled on mobile (as the whole page is scaled). So you'll need to implement custom controls (e.g. single large close button in top right corner).
- All code in the documentation is pure Vanilla JS and supports IE 8 and above. If your website or app uses some JavaScript framework (like jQuery or MooTools) or you don't need to support old browsers – feel free to simplify the code.
- Avoid serving big images (larger than 2000x1500px) for mobile, as they will dramatically reduce animation performance and can cause crash (especially on iOS Safari). Possible solutions: [serve responsive images](responsive-images.html), or open image on a separate page, or use libraries that support image tiling (like [Leaflet](http://leafletjs.com/)) ([more in FAQ](faq.html#mobile-crash)).
## <a name="initialization"></a> Initialization
### <a name="init-include-files"></a>Step 1: include JS and CSS files
You can find them in [dist/](https://github.com/dimsemenov/PhotoSwipe/tree/master/dist) folder of [GitHub](https://github.com/dimsemenov/PhotoSwipe) repository. Sass and uncompiled JS files are in folder [src/](https://github.com/dimsemenov/PhotoSwipe/tree/master/src). I recommend using Sass if you're planning to modify existing styles, as code there is structured and commented.
```html
<!-- Core CSS file -->
<link rel="stylesheet" href="path/to/photoswipe.css">
<!-- Skin CSS file (styling of UI - buttons, caption, etc.)
In the folder of skin CSS file there are also:
- .png and .svg icons sprite,
- preloader.gif (for browsers that do not support CSS animations) -->
<link rel="stylesheet" href="path/to/default-skin/default-skin.css">
<!-- Core JS file -->
<script src="path/to/photoswipe.min.js"></script>
<!-- UI JS file -->
<script src="path/to/photoswipe-ui-default.min.js"></script>
```
It doesn't matter how and where will you include JS and CSS files. Code is executed only when you call `new PhotoSwipe()`. So feel free to defer loading of files if you don't need PhotoSwipe to be opened initially.
PhotoSwipe also supports AMD loaders (like RequireJS) and CommonJS, use them like so:
```javascript
require([
'path/to/photoswipe.js',
'path/to/photoswipe-ui-default.js'
], function( PhotoSwipe, PhotoSwipeUI_Default ) {
// var gallery = new PhotoSwipe( someElement, PhotoSwipeUI_Default ...
// gallery.init()
// ...
});
```
And also, you can install it via Bower (`bower install photoswipe`), or [NPM](https://www.npmjs.com/package/photoswipe) (`npm install photoswipe`).
### <a name="init-add-pswp-to-dom"></a>Step 2: add PhotoSwipe (.pswp) element to DOM
You can add HTML code dynamically via JS (directly before the initialization), or have it in the page initially (like it's done on the demo page). This code can be appended anywhere, but ideally before the closing `</body>`. You may reuse it across multiple galleries (as long as you use same UI class).
```html
<!-- Root element of PhotoSwipe. Must have class pswp. -->
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">
<!-- Background of PhotoSwipe.
It's a separate element as animating opacity is faster than rgba(). -->
<div class="pswp__bg"></div>
<!-- Slides wrapper with overflow:hidden. -->
<div class="pswp__scroll-wrap">
<!-- Container that holds slides.
PhotoSwipe keeps only 3 of them in the DOM to save memory.
Don't modify these 3 pswp__item elements, data is added later on. -->
<div class="pswp__container">
<div class="pswp__item"></div>
<div class="pswp__item"></div>
<div class="pswp__item"></div>
</div>
<!-- Default (PhotoSwipeUI_Default) interface on top of sliding area. Can be changed. -->
<div class="pswp__ui pswp__ui--hidden">
<div class="pswp__top-bar">
<!-- Controls are self-explanatory. Order can be changed. -->
<div class="pswp__counter"></div>
<button class="pswp__button pswp__button--close" title="Close (Esc)"></button>
<button class="pswp__button pswp__button--share" title="Share"></button>
<button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>
<button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>
<!-- Preloader demo http://codepen.io/dimsemenov/pen/yyBWoR -->
<!-- element will get class pswp__preloader--active when preloader is running -->
<div class="pswp__preloader">
<div class="pswp__preloader__icn">
<div class="pswp__preloader__cut">
<div class="pswp__preloader__donut"></div>
</div>
</div>
</div>
</div>
<div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
<div class="pswp__share-tooltip"></div>
</div>
<button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
</button>
<button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
</button>
<div class="pswp__caption">
<div class="pswp__caption__center"></div>
</div>
</div>
</div>
</div>
```
Order of `pswp__bg`, `pswp__scroll-wrap`, `pswp__container` and `pswp__item` elements should not be changed.
You might ask, why PhotoSwipe doesn't add this code automatically via JS, reason is simple – just to save file size, in case if you need some modification of layout.
### Step 3: initialize
Execute `PhotoSwipe` constructor. It accepts 4 arguments:
1. `.pswp` element from step 2 (it must be added to DOM).
2. PhotoSwipe UI class. If you included default `photoswipe-ui-default.js`, class will be `PhotoSwipeUI_Default`. Can be `false`.
3. Array with objects (slides).
4. [Options](options.html).
```javascript
var pswpElement = document.querySelectorAll('.pswp')[0];
// build items array
var items = [
{
src: 'https://placekitten.com/600/400',
w: 600,
h: 400
},
{
src: 'https://placekitten.com/1200/900',
w: 1200,
h: 900
}
];
// define options (if needed)
var options = {
// optionName: 'option value'
// for example:
index: 0 // start at first slide
};
// Initializes and opens PhotoSwipe
var gallery = new PhotoSwipe( pswpElement, PhotoSwipeUI_Default, items, options);
gallery.init();
```
At the end you should get something like this:
<div class="codepen-embed">
<p data-height="600" data-theme-id="10447" data-slug-hash="gbadPv" data-default-tab="result" data-user="dimsemenov" class='codepen'>
<a href="http://codepen.io/dimsemenov/pen/gbadPv/" target="_blank"><strong>View example on CodePen →</strong></a>
</p>
<!-- <script async src="//assets.codepen.io/assets/embed/ei.js"></script> -->
</div>
## <a name="creating-slide-objects-array"></a> Creating an Array of Slide Objects
Each object in the array should contain data about slide, it can be anything that you wish to display in PhotoSwipe - path to image, caption string, number of shares, comments, etc.
By default PhotoSwipe uses just 5 properties: `src` (path to image), `w` (image width), `h` (image height), `msrc` (path to small image placeholder, large image will be loaded on top), `html` (custom HTML, [more about it](custom-html-in-slides.html)).
During the navigation, PhotoSwipe adds its own properties to this object (like `minZoom` or `loaded`).
```javascript
var slides = [
// slide 1
{
src: 'path/to/image1.jpg', // path to image
w: 1024, // image width
h: 768, // image height
msrc: 'path/to/small-image.jpg', // small image placeholder,
// main (large) image loads on top of it,
// if you skip this parameter - grey rectangle will be displayed,
// try to define this property only when small image was loaded before
title: 'Image Caption' // used by Default PhotoSwipe UI
// if you skip it, there won't be any caption
// You may add more properties here and use them.
// For example, demo gallery uses "author" property, which is used in the caption.
// author: 'John Doe'
},
// slide 2
{
src: 'path/to/image2.jpg',
w: 600,
h: 600
// etc.
}
// etc.
];
```
You may dynamically define slide object properties directly before PhotoSwipe reads them, use `gettingData` event (more info in [API section of docs](api.html)). For example, this technique can be used to [serve different images](responsive-images.html) for different screen sizes.
## <a class="anchor" name="dom-to-slide-objects"></a> How to build an array of slides from a list of links
Let's assume that you have a list of links/thumbnails that look like this ([more info about markup of gallery](seo.html)):
```html
<div class="my-gallery" itemscope itemtype="http://schema.org/ImageGallery">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<a href="large-image.jpg" itemprop="contentUrl" data-size="600x400">
<img src="small-image.jpg" itemprop="thumbnail" alt="Image description" />
</a>
<figcaption itemprop="caption description">Image caption</figcaption>
</figure>
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<a href="large-image.jpg" itemprop="contentUrl" data-size="600x400">
<img src="small-image.jpg" itemprop="thumbnail" alt="Image description" />
</a>
<figcaption itemprop="caption description">Image caption</figcaption>
</figure>
</div>
```
... and you want click on the thumbnail to open PhotoSwipe with large image (like it's done on a demo page). All you need to do is:
1. Bind click event to links/thumbnails.
2. After user clicked on thumbnail, find its index.
3. Create an array of slide objects from DOM elements – loop through all links and retrieve `href` attribute (large image url), `data-size` attribute (its size), `src` of thumbnail, and contents of caption.
PhotoSwipe doesn't really care how will you do this. If you use frameworks like jQuery or MooTools, or if you don't need to support IE8, code can be simplified dramatically.
Here is pure Vanilla JS implementation with IE8 support:
```javascript
var initPhotoSwipeFromDOM = function(gallerySelector) {
// parse slide data (url, title, size ...) from DOM elements
// (children of gallerySelector)
var parseThumbnailElements = function(el) {
var thumbElements = el.childNodes,
numNodes = thumbElements.length,
items = [],
figureEl,
linkEl,
size,
item;
for(var i = 0; i < numNodes; i++) {
figureEl = thumbElements[i]; // <figure> element
// include only element nodes
if(figureEl.nodeType !== 1) {
continue;
}
linkEl = figureEl.children[0]; // <a> element
size = linkEl.getAttribute('data-size').split('x');
// create slide object
item = {
src: linkEl.getAttribute('href'),
w: parseInt(size[0], 10),
h: parseInt(size[1], 10)
};
if(figureEl.children.length > 1) {
// <figcaption> content
item.title = figureEl.children[1].innerHTML;
}
if(linkEl.children.length > 0) {
// <img> thumbnail element, retrieving thumbnail url
item.msrc = linkEl.children[0].getAttribute('src');
}
item.el = figureEl; // save link to element for getThumbBoundsFn
items.push(item);
}
return items;
};
// find nearest parent element
var closest = function closest(el, fn) {
return el && ( fn(el) ? el : closest(el.parentNode, fn) );
};
// triggers when user clicks on thumbnail
var onThumbnailsClick = function(e) {
e = e || window.event;
e.preventDefault ? e.preventDefault() : e.returnValue = false;
var eTarget = e.target || e.srcElement;
// find root element of slide
var clickedListItem = closest(eTarget, function(el) {
return (el.tagName && el.tagName.toUpperCase() === 'FIGURE');
});
if(!clickedListItem) {
return;
}
// find index of clicked item by looping through all child nodes
// alternatively, you may define index via data- attribute
var clickedGallery = clickedListItem.parentNode,
childNodes = clickedListItem.parentNode.childNodes,
numChildNodes = childNodes.length,
nodeIndex = 0,
index;
for (var i = 0; i < numChildNodes; i++) {
if(childNodes[i].nodeType !== 1) {
continue;
}
if(childNodes[i] === clickedListItem) {
index = nodeIndex;
break;
}
nodeIndex++;
}
if(index >= 0) {
// open PhotoSwipe if valid index found
openPhotoSwipe( index, clickedGallery );
}
return false;
};
// parse picture index from URL (#pid)
var photoswipeParseHash = function() {
var hash = window.location.hash.substring(1);
var params = {};
if(hash.length < 1) {
return params;
}
params['pid'] = hash;
return params;
};
var openPhotoSwipe = function(index, galleryElement, disableAnimation, fromURL) {
var pswpElement = document.querySelectorAll('.pswp')[0],
gallery,
options,
items;
items = parseThumbnailElements(galleryElement);
// define options (if needed)
options = {
getThumbBoundsFn: function(index) {
// See Options -> getThumbBoundsFn section of documentation for more info
var thumbnail = items[index].el.getElementsByTagName('img')[0], // find thumbnail
pageYScroll = window.pageYOffset || document.documentElement.scrollTop,
rect = thumbnail.getBoundingClientRect();
return {x:rect.left, y:rect.top + pageYScroll, w:rect.width};
}
};
// PhotoSwipe opened from URL
if(fromURL) {
if(options.galleryPIDs) {
// parse real index when custom PIDs are used
// http://photoswipe.com/documentation/faq.html#custom-pid-in-url
for(var j = 0; j < items.length; j++) {
if(items[j].pid == index) {
options.index = j;
break;
}
}
} else {
// in URL indexes start from 1
let idx = parseInt(index, 10) - 1;
if (idx >= 0 && idx < items.length) {
options.index = idx;
}
}
} else {
let idx = parseInt(index, 10);
if (idx >= 0 && idx < items.length) {
options.index = idx;
}
}
// exit if index not found
if( isNaN(options.index) ) {
return;
}
if(disableAnimation) {
options.showAnimationDuration = 0;
}
// Pass data to PhotoSwipe and initialize it
gallery = new PhotoSwipe( pswpElement, PhotoSwipeUI_Default, items, options);
gallery.init();
};
// loop through all gallery elements and bind events
var galleryElements = document.querySelectorAll( gallerySelector );
for(var i = 0, l = galleryElements.length; i < l; i++) {
galleryElements[i].onclick = onThumbnailsClick;
}
// Parse URL and open gallery if it contains #pid
var hashData = photoswipeParseHash();
if(hashData.pid) {
openPhotoSwipe( hashData.pid , galleryElements[0], true, true );
}
};
// execute above function
initPhotoSwipeFromDOM('.my-gallery');
```
Example on CodePen (`focus` & `history` options are disabled due to embed issues):
<div class="codepen-embed">
<p data-height="600" data-theme-id="10447" data-slug-hash="ZYbPJM" data-default-tab="result" data-user="dimsemenov" class='codepen'>
<a href="http://codepen.io/dimsemenov/pen/ZYbPJM/" target="_blank"><strong>View example on CodePen →</strong></a>
</p>
</div>
Tip: you may download example from CodePen to play with it locally (`Edit on CodePen` -> `Share` -> `Export .zip`).
- If you're using markup that differs from this example, you'll need to edit function `parseThumbnailElements`.
- If you're not experienced in pure JavaScript and don't know how to parse DOM, refer to [QuirksMode](http://quirksmode.org/dom/core/#gettingelements) and [documentation on MDN](https://developer.mozilla.org/en-US/docs/Web/API/Element.getElementsByTagName).
- Note that IE8 does not support HTML5 `<figure>` and `<figcaption>` elements, so you need to include [html5shiv](https://github.com/aFarkas/html5shiv) in `<head>` section ([cdnjs hosted version](http://cdnjs.com/libraries/html5shiv/) is used in example):
```html
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.2/html5shiv.min.js"></script>
<![endif]-->
```
## About
Please [keep script updated](faq.html#keep-updated), report bugs through [GitHub](https://github.com/dimsemenov/PhotoSwipe), suggest features on [UserVoice](https://photoswipe.uservoice.com/forums/275302-feature-requests-ideas) and ask questions through [StackOverflow](http://stackoverflow.com/questions/ask?tags=javascript,photoswipe).
Know how this page can be improved? Found a typo? [Suggest an edit!](https://github.com/dimsemenov/PhotoSwipe/blob/master/website/documentation/getting-started.md)
<iframe src="http://ghbtns.com/github-btn.html?user=dimsemenov&repo=photoswipe&type=watch&count=true&size=large" allowtransparency="true" frameborder="0" scrolling="0" width="155" height="30" style=""></iframe>
| 33.221374 | 395 | 0.693934 | eng_Latn | 0.806881 |
f482e4404e69efae5d721946212643e2822b79ad | 4,771 | md | Markdown | reference/7.1/Microsoft.PowerShell.Core/About/about_Locations.md | TBrasse/PowerShell-Docs | 2b72cc4f9000724a2eb83d3f6f90806004ad349e | [
"CC-BY-4.0",
"MIT"
] | 892 | 2019-01-24T05:33:27.000Z | 2022-03-31T02:39:46.000Z | reference/7.1/Microsoft.PowerShell.Core/About/about_Locations.md | TBrasse/PowerShell-Docs | 2b72cc4f9000724a2eb83d3f6f90806004ad349e | [
"CC-BY-4.0",
"MIT"
] | 5,152 | 2019-01-24T01:36:00.000Z | 2022-03-31T21:57:54.000Z | reference/7.1/Microsoft.PowerShell.Core/About/about_Locations.md | TBrasse/PowerShell-Docs | 2b72cc4f9000724a2eb83d3f6f90806004ad349e | [
"CC-BY-4.0",
"MIT"
] | 894 | 2019-01-25T03:04:48.000Z | 2022-03-31T15:12:56.000Z | ---
description: Describes how to access items from the working location in PowerShell.
Locale: en-US
ms.date: 03/15/2021
online version: https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_locations?view=powershell-7.1&WT.mc_id=ps-gethelp
schema: 2.0.0
title: about Locations
---
# about_Locations
## Short description
Describes how to access items from the working location in PowerShell.
## Long description
The current working location is the default location to which commands point.
In other words, this is the location that PowerShell uses if you do not supply
an explicit path to the item or location that is affected by the command.
> [!NOTE]
> PowerShell supports multiple runspaces per process. Each runspace has its own
> _current directory_. This is not the same as the current directory of the
> process: `[System.Environment]::CurrentDirectory`.
In most cases, the current working location is a drive accessed through the
PowerShell FileSystem provider and, in some cases, a directory on that drive.
For example, you might set your current working location to the following
location:
```powershell
C:\Program Files\Windows PowerShell
```
As a result, all commands are processed from this location unless another path
is explicitly provided.
PowerShell maintains the current working location for each drive even when the
drive is not the current drive. This allows you to access items from the
current working location by referring only to the drive of another location.
For example, suppose that your current working location is `C:\Windows`. Now,
suppose you use the following command to change your current working location
to the HKLM: drive:
```powershell
Set-Location HKLM:
```
Although your current location is now the registry drive, you can still access
items in the `C:\Windows` directory simply by using the C: drive, as shown in
the following example:
```powershell
Get-ChildItem C:
```
PowerShell remembers that your current working location for that drive is the
Windows directory, so it retrieves items from that directory. The results
would be the same if you ran the following command:
```powershell
Get-ChildItem C:\Windows
```
In PowerShell, you can use the Get-Location command to determine the current
working location, and you can use the Set-Location command to set the current
working location. For example, the following command sets the current working
location to the Windows directory of the C: drive:
```powershell
Set-Location c:\windows
```
After you set the current working location, you can still access items from
other drives simply by including the drive name (followed by a colon) in the
command, as shown in the following example:
```powershell
Get-ChildItem HKLM:\software
```
The example command retrieves a list of items in the Software container of the
HKEY Local Machine hive in the registry.
PowerShell also allows you to use special characters to represent the current
working location and its parent location. To represent the current working
location, use a single period. To represent the parent of the current working
location, use two periods. For example, the following specifies the System
subdirectory in the current working location:
```powershell
Get-ChildItem .\system
```
If the current working location is `C:\Windows`, this command returns a list of
all the items in `C:\Windows\System`. However, if you use two periods, the
parent directory of the current working directory is used, as shown in the
following example:
```powershell
Get-ChildItem ..\"program files"
```
In this case, PowerShell treats the two periods as the C: drive, so the
command retrieves all the items in the `C:\Program Files` directory.
A path beginning with a slash identifies a path from the root of the current
drive. For example, if your current working location is
`C:\Program Files\PowerShell`, the root of your drive is C. Therefore, the
following command lists all items in the `C:\Windows` directory:
```powershell
Get-ChildItem \windows
```
If you do not specify a path beginning with a drive name, slash, or period
when supplying the name of a container or item, the container or item is
assumed to be located in the current working location. For example, if your
current working location is `C:\Windows`, the following command returns all the
items in the `C:\Windows\System` directory:
```powershell
Get-ChildItem system
```
If you specify a file name rather than a directory name, PowerShell returns
details about that file (assuming that file is located in the current working
location).
## See also
[Set-Location](xref:Microsoft.PowerShell.Management.Set-Location)
[about_Providers](about_Providers.md)
[about_Path_Syntax](about_Path_Syntax.md)
| 35.080882 | 148 | 0.788304 | eng_Latn | 0.999544 |
f4841538ec691f043b0ca8d7f8952bbe5abf4ec9 | 6,960 | md | Markdown | node_modules/express-session/README.md | vibhatha/digitalocean_7 | 2c14e6b543e76f65a84c9ca91dbec8725f6e7e22 | [
"MIT"
] | null | null | null | node_modules/express-session/README.md | vibhatha/digitalocean_7 | 2c14e6b543e76f65a84c9ca91dbec8725f6e7e22 | [
"MIT"
] | null | null | null | node_modules/express-session/README.md | vibhatha/digitalocean_7 | 2c14e6b543e76f65a84c9ca91dbec8725f6e7e22 | [
"MIT"
] | null | null | null | # express-session
[![NPM Version](https://badge.fury.io/js/express-session.svg)](https://badge.fury.io/js/express-session)
[![Build Status](https://travis-ci.org/expressjs/session.svg?branch=master)](https://travis-ci.org/expressjs/session)
[![Coverage Status](https://img.shields.io/coveralls/expressjs/session.svg?branch=master)](https://coveralls.io/r/expressjs/session)
THIS REPOSITORY NEEDS A MAINTAINER.
If you are interested in maintaining this module, please start contributing by making PRs and solving / discussing unsolved issues.
## API
```js
var express = require('express')
var session = require('express-session')
var app = express()
app.use(session({secret: 'keyboard cat'}))
```
### session(options)
Setup session store with the given `options`.
Session data is _not_ saved in the cookie itself, just the session ID.
#### Options
- `name` - cookie name (formerly known as `key`). (default: `'connect.sid'`)
- `store` - session store instance.
- `secret` - session cookie is signed with this secret to prevent tampering.
- `cookie` - session cookie settings.
- (default: `{ path: '/', httpOnly: true, secure: false, maxAge: null }`)
- `genid` - function to call to generate a new session ID. (default: uses `uid2` library)
- `rolling` - forces a cookie set on every response. This resets the expiration date. (default: `false`)
- `resave` - forces session to be saved even when unmodified. (default: `true`)
- `proxy` - trust the reverse proxy when setting secure cookies (via "x-forwarded-proto" header). When set to `true`, the "x-forwarded-proto" header will be used. When set to `false`, all headers are ignored. When left unset, will use the "trust proxy" setting from express. (default: `undefined`)
- `saveUninitialized` - forces a session that is "uninitialized" to be saved to the store. A session is uninitialized when it is new but not modified. This is useful for implementing login sessions, reducing server storage usage, or complying with laws that require permission before setting a cookie. (default: `true`)
- `unset` - controls result of unsetting `req.session` (through `delete`, setting to `null`, etc.). This can be "keep" to keep the session in the store but ignore modifications or "destroy" to destroy the stored session. (default: `'keep'`)
#### options.genid
Generate a custom session ID for new sessions. Provide a function that returns a string that will be used as a session ID. The function is given `req` as the first argument if you want to use some value attached to `req` when generating the ID.
**NOTE** be careful you generate unique IDs so your sessions do not conflict.
```js
app.use(session({
genid: function(req) {
return genuuid(); // use UUIDs for session IDs
},
secret: 'keyboard cat'
}))
```
#### Cookie options
Please note that `secure: true` is a **recommended** option. However, it requires an https-enabled website, i.e., HTTPS is necessary for secure cookies.
If `secure` is set, and you access your site over HTTP, the cookie will not be set. If you have your node.js behind a proxy and are using `secure: true`, you need to set "trust proxy" in express:
```js
var app = express()
app.set('trust proxy', 1) // trust first proxy
app.use(session({
secret: 'keyboard cat'
, cookie: { secure: true }
}))
```
For using secure cookies in production, but allowing for testing in development, the following is an example of enabling this setup based on `NODE_ENV` in express:
```js
var app = express()
var sess = {
secret: 'keyboard cat'
cookie: {}
}
if (app.get('env') === 'production') {
app.set('trust proxy', 1) // trust first proxy
sess.cookie.secure = true // serve secure cookies
}
app.use(session(sess))
```
By default `cookie.maxAge` is `null`, meaning no "expires" parameter is set
so the cookie becomes a browser-session cookie. When the user closes the
browser the cookie (and session) will be removed.
### req.session
To store or access session data, simply use the request property `req.session`,
which is (generally) serialized as JSON by the store, so nested objects
are typically fine. For example below is a user-specific view counter:
```js
app.use(session({ secret: 'keyboard cat', cookie: { maxAge: 60000 }}))
app.use(function(req, res, next) {
var sess = req.session
if (sess.views) {
sess.views++
res.setHeader('Content-Type', 'text/html')
res.write('<p>views: ' + sess.views + '</p>')
res.write('<p>expires in: ' + (sess.cookie.maxAge / 1000) + 's</p>')
res.end()
} else {
sess.views = 1
res.end('welcome to the session demo. refresh!')
}
})
```
#### Session.regenerate()
To regenerate the session simply invoke the method, once complete
a new SID and `Session` instance will be initialized at `req.session`.
```js
req.session.regenerate(function(err) {
// will have a new session here
})
```
#### Session.destroy()
Destroys the session, removing `req.session`, will be re-generated next request.
```js
req.session.destroy(function(err) {
// cannot access session here
})
```
#### Session.reload()
Reloads the session data.
```js
req.session.reload(function(err) {
// session updated
})
```
#### Session.save()
```js
req.session.save(function(err) {
// session saved
})
```
#### Session.touch()
Updates the `.maxAge` property. Typically this is
not necessary to call, as the session middleware does this for you.
### req.session.cookie
Each session has a unique cookie object accompany it. This allows
you to alter the session cookie per visitor. For example we can
set `req.session.cookie.expires` to `false` to enable the cookie
to remain for only the duration of the user-agent.
#### Cookie.maxAge
Alternatively `req.session.cookie.maxAge` will return the time
remaining in milliseconds, which we may also re-assign a new value
to adjust the `.expires` property appropriately. The following
are essentially equivalent
```js
var hour = 3600000
req.session.cookie.expires = new Date(Date.now() + hour)
req.session.cookie.maxAge = hour
```
For example when `maxAge` is set to `60000` (one minute), and 30 seconds
has elapsed it will return `30000` until the current request has completed,
at which time `req.session.touch()` is called to reset `req.session.maxAge`
to its original value.
```js
req.session.cookie.maxAge // => 30000
```
## Session Store Implementation
Every session store _must_ implement the following methods
- `.get(sid, callback)`
- `.set(sid, session, callback)`
- `.destroy(sid, callback)`
Recommended methods include, but are not limited to:
- `.length(callback)`
- `.clear(callback)`
For an example implementation view the [connect-redis](http://github.com/visionmedia/connect-redis) repo.
| 34.285714 | 322 | 0.693822 | eng_Latn | 0.974045 |
f4841bef52823aa1b2d7b4c21653b30b290668e0 | 368 | md | Markdown | README.md | morelli690/_hack_cpp_driver_r6s-external-nuklear | 1a8f327557b537a45abf1db0c3d4c868e6351739 | [
"MIT"
] | 1 | 2019-10-17T14:13:36.000Z | 2019-10-17T14:13:36.000Z | README.md | morelli690/_hack_cpp_driver_r6s-external-nuklear | 1a8f327557b537a45abf1db0c3d4c868e6351739 | [
"MIT"
] | null | null | null | README.md | morelli690/_hack_cpp_driver_r6s-external-nuklear | 1a8f327557b537a45abf1db0c3d4c868e6351739 | [
"MIT"
] | null | null | null | # r6s-external-nuklear
Simple copy paste ready base for external cheats using sockets for km-um communication, intended to be manual mapped using drvmap or kdmapper.
Driver is made for winver: 1903.
![1337 cheats](https://i.gyazo.com/e18b2c306c94472e0953036825dcdf1a.png)
Use it for whatever you want. Feel free to add me on discord if help is needed (アレックス#0481).
| 40.888889 | 142 | 0.788043 | eng_Latn | 0.976127 |
f48437233bb02d922a3486ba6c1f83d6d48accea | 4,925 | md | Markdown | articles/service-health/service-health-alert-webhook-opsgenie.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-health/service-health-alert-webhook-opsgenie.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-health/service-health-alert-webhook-opsgenie.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure szolgáltatásbeli állapottal kapcsolatos riasztások küldése a OpsGenie webhookok használatával
description: Személyre szabott értesítések beszerzése a OpsGenie-példány szolgáltatás állapotával kapcsolatos eseményekről.
ms.topic: conceptual
ms.date: 06/10/2019
ms.openlocfilehash: def12d5e7b1b93b8370cd7be61538fca53531ae1
ms.sourcegitcommit: 747a20b40b12755faa0a69f0c373bd79349f39e3
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 02/27/2020
ms.locfileid: "77654137"
---
# <a name="send-azure-service-health-alerts-with-opsgenie-using-webhooks"></a>Azure szolgáltatásbeli állapottal kapcsolatos riasztások küldése a OpsGenie webhookok használatával
Ebből a cikkből megtudhatja, hogyan állíthatja be az Azure szolgáltatás állapotával kapcsolatos riasztásokat a OpsGenie webhook használatával. A [OpsGenie](https://www.opsgenie.com/)Azure Service Health-integrációjának használatával Azure Service Health riasztásokat továbbíthat a OpsGenie. A OpsGenie meg tudja határozni a megfelelő személyeket a hívási ütemezés alapján, e-mailben, SMS-ben, telefonhívásokkal, iOS & Android leküldéses értesítésekkel és a riasztások kiterjesztésével, amíg a riasztást nem igazolják vagy bezárták.
## <a name="creating-a-service-health-integration-url-in-opsgenie"></a>Szolgáltatás állapot-integrációs URL-cím létrehozása a OpsGenie-ben
1. Győződjön meg arról, hogy regisztrált, és be van jelentkezve a [OpsGenie](https://www.opsgenie.com/) -fiókjába.
1. Navigáljon a OpsGenie **integrációk** szakaszához.
![A OpsGenie integrációs szakasza](./media/webhook-alerts/opsgenie-integrations-section.png)
1. Válassza a **Azure Service Health** Integration (integráció) gombot.
![A "Azure Service Health gomb" a OpsGenie](./media/webhook-alerts/opsgenie-azureservicehealth-button.png)
1. **Nevezze** el a riasztást, és adja meg a **hozzárendelt csoport** mezőt.
1. Töltse ki a többi mezőt, például a **címzetteket**, az **engedélyezést**és az **értesítések mellőzését**.
1. Másolja ki és mentse el az **integrációs URL-címet**, amelynek már tartalmaznia kell a `apiKey` a végéhez hozzáfűzve.
![Az "integrációs URL" a OpsGenie](./media/webhook-alerts/opsgenie-integration-url.png)
1. **Integrációs mentés** kiválasztása
## <a name="create-an-alert-using-opsgenie-in-the-azure-portal"></a>Riasztás létrehozása a Azure Portal OpsGenie használatával
### <a name="for-a-new-action-group"></a>Új műveleti csoport esetén:
1. Kövesse az 1 – 8. lépést a [riasztás létrehozása a szolgáltatás állapotáról szóló értesítésben egy új műveleti csoportra vonatkozóan a Azure Portal használatával](../azure-monitor/platform/alerts-activity-log-service-notifications.md).
1. Definiálás a **műveletek**listájában:
a. **Művelet típusa:** *webhook*
b. **Részletek:** A korábban mentett OpsGenie **-integrációs URL-cím** .
c. **Név:** Webhook neve, aliasa vagy azonosítója.
1. A riasztás létrehozásához válassza a **Mentés** lehetőséget.
### <a name="for-an-existing-action-group"></a>Meglévő műveleti csoport esetén:
1. A [Azure Portal](https://portal.azure.com/)válassza a **figyelő**elemet.
1. A **Beállítások** szakaszban válassza a **műveleti csoportok**lehetőséget.
1. Keresse meg és válassza ki a szerkeszteni kívánt műveleti csoportot.
1. Hozzáadás a **műveletek**listájához:
a. **Művelet típusa:** *webhook*
b. **Részletek:** A korábban mentett OpsGenie **-integrációs URL-cím** .
c. **Név:** Webhook neve, aliasa vagy azonosítója.
1. A műveleti csoport frissítéséhez válassza a **Mentés** lehetőséget.
## <a name="testing-your-webhook-integration-via-an-http-post-request"></a>Webhook-integráció tesztelése HTTP POST-kérelem használatával
1. Hozza létre a küldeni kívánt szolgáltatás-állapot adattartalmát. Az Azure-beli [tevékenységekre vonatkozó riasztások webhookok](../azure-monitor/platform/activity-log-alerts-webhook.md)szolgáltatásában például a Service Health webhook hasznos adatai találhatók.
1. Hozzon létre egy HTTP POST-kérelmet a következőképpen:
```
POST https://api.opsgenie.com/v1/json/azureservicehealth?apiKey=<APIKEY>
HEADERS Content-Type: application/json
BODY <service health payload>
```
1. A "sikeres" állapotú üzenettel `200 OK` választ kapni.
1. Nyissa meg a [OpsGenie](https://www.opsgenie.com/) , és ellenőrizze, hogy sikeresen beállította-e az integrációt.
## <a name="next-steps"></a>Következő lépések
- Megtudhatja, hogyan [konfigurálhat webhook-értesítéseket a meglévő probléma-felügyeleti rendszerekhez](service-health-alert-webhook-guide.md).
- Tekintse át a [tevékenység naplójának riasztása webhook sémáját](../azure-monitor/platform/activity-log-alerts-webhook.md).
- Tudnivalók a [szolgáltatás állapotával kapcsolatos értesítésekről](../azure-monitor/platform/service-notifications.md).
- További információ a [műveleti csoportokról](../azure-monitor/platform/action-groups.md). | 55.337079 | 531 | 0.775431 | hun_Latn | 0.999951 |
f484a70f7ab75a98cb51a8dd23e16aaf7bce0b70 | 20,751 | md | Markdown | course/content/serialize-instruction-data.md | ic-solutions-group/soldev-ui | 1284321ba6e57d75e968517af54214fab55a0185 | [
"MIT"
] | null | null | null | course/content/serialize-instruction-data.md | ic-solutions-group/soldev-ui | 1284321ba6e57d75e968517af54214fab55a0185 | [
"MIT"
] | null | null | null | course/content/serialize-instruction-data.md | ic-solutions-group/soldev-ui | 1284321ba6e57d75e968517af54214fab55a0185 | [
"MIT"
] | null | null | null | # Serializing Custom Instruction Data
# Lesson Objectives
*By the end of this lesson, you will be able to:*
- Explain the contents of a transaction
- Explain transaction instructions
- Explain the basics of Solana's runtime optimizations
- Explain Borsh
- Use Borsh to serialize custom instruction data
# TL;DR
- Transactions are made up of an array of instructions, a single transaction can have any number of instructions in it, each targeting its own program. When a transaction is submitted, the Solana runtime will process its instructions in order and atomically, meaning that if any of the instructions fail for any reason, the entire transaction will fail to be processed.
- Every *instruction* is made up of 3 components: the program_id of the intended program, an array of all account’s involved, and a byte buffer of instruction data.
- Every *transaction* contains: an array of all accounts it intends to read from or write to, one of more instructions, a recent blockhash, and one or more signatures.
- In order to pass instruction data from a client, it must be serialized into a byte buffer. To facilitate this process of serialization, we will be using [Borsh](https://borsh.io/).
- Transactions can fail to be processed by the blockchain for any number of reasons, we’ll discuss some of the most common ones here.
# Overview
## Transactions
Transactions are how we send information to the blockchain to be processed. So far, we’ve learned how to create very basic transactions with limited functionality. But transactions, and the programs they are sent to, can be designed to be far more flexible and handle far more complexity than we’ve dealt with up to now.
### Transaction Contents
Every transaction contains:
- An array that includes every account it intends to read from or write to
- One or more instructions
- A recent blockhash
- One or more signatures
`@solana/web3.js` simplifies this process for you so that all you really need to focus on is adding instructions and signatures. The library builds the array of accounts based on that information and handles the logic for including a recent blockhash.
### Instructions
Every instruction contains:
- The program id (public key) of the intended program
- An array that includes every account that will be read from or written to during execution
- A byte buffer of instruction data
Including the program id ensures that the instruction is carried out by the correct program.
Including an array of every account that we be read from or written to allows the network to perform a number of optimizations that allow for high transaction load and quicker execution.
Including the byte buffer lets you pass external data to a program.
You can include multiple instructions in a single transaction. The Solana runtime will process these instructions in order and atomically. In other words, if every instruction succeeds then the transaction as a whole will be successful, but if a single instruction fails then the entire transaction will fail immediately with no side-effects.
A note on the account array and optimization:
It is not just an array of the accounts’ public keys. Each object in the array includes the account’s public key, whether or not it is a signer on the transaction, and whether or not it is writable. Including whether or not an account is writable during the execution of an instruction allows the runtime to facilitate parallel processing of smart contracts. Because you must define which accounts are read-only and which you will write to, the runtime can determine which transactions are non-overlapping or read-only and allow them to execute concurrently. To learn more about the Solana’s runtime, check out this [blog post](https://solana.com/news/sealevel---parallel-processing-thousands-of-smart-contracts).
### Instruction Data
The ability to add arbitrary data to an instruction ensures that programs can be dynamic and flexible enough for broad use cases in the same way that the body of an HTTP request lets you build dynamic and flexible REST APIs.
Just as the structure of the body of an HTTP request is dependent on the endpoint you intend to call, the structure of the byte buffer used as instruction data is entirely dependent on the recipient program. If you’re building a full-stack dApp on your own, then you’ll need to copy the same structure that you used when building the program over to the client-side code. If you’re working with another developer who is handling the program development, you can coordinate to ensure matching buffer layouts.
Let’s think about a concrete example. Imagine working on a Web3 game and being responsible for writing client-side code that interacts with a player inventory program. The program was designed to allow the client to:
- Add inventory based on a player’s game-play results
- Transfer inventory from one player to another
- Equip a player with selected inventory items
This program would have been structured such that each of these is encapsulated in its own function.
Each program, however, only has one entry point. You would instruct the program on which of these functions to run through the instruction data.
You would also include in the instruction data any information the function needs in order to execute properly, e.g. an inventory item’s id, a player to transfer inventory to, etc.
Exactly *how* this data would be structured would depend on how the program was written, but it’s common to have the first field in instruction data be a number that the program can map to a function, after which additional fields act as function arguments.
## Serialization
In addition to knowing what information to include in an instruction data buffer, you also need to serialize it properly. The most common serializer used in Solana is [Borsh](https://borsh.io). Per the website:
> Borsh stands for Binary Object Representation Serializer for Hashing. It is meant to be used in security-critical projects as it prioritizes consistency, safety, speed; and comes with a strict specification.
>
Borsh maintains a [JS library](https://github.com/near/borsh-js) that handles serializing common types into a buffer. There are also other packages built on top of borsh that try to make this process even easier. We’ll be using the `@project-serum/borsh` library which can be installed using `npm`.
Building off of the previous game inventory example, let’s look at a hypothetical scenario where we are instructing the program to equip a player with a given item. Assume the program is designed to accept a buffer that represents a struct with the following properties:
1. `variant` as an unsigned, 8-bit integer that instructs the program which instruction, or function, to execute.
2. `playerId` as an unsigned, 16-bit integer that represents the player ID of the player who is to be equipped with the given item.
3. `itemId` as an unsigned, 256-bit integer that represents the item ID of the item that will be equipped to the given player.
All of this will be passed as a byte buffer that will be read in order, so ensuring that your buffer layout is ordered properly is crucial. You would create the buffer layout schema or template for the above as follows:
```tsx
import * as borsh from '@project-serum/borsh'
const equipPlayerSchema = borsh.struct([
borsh.u8('variant'),
borsh.u16('playerId'),
borsh.u256('itemId')
])
```
You can then encode data using this schema with the `encode` method. This method accepts as arguments an object representing the data to be serialized and a buffer. In the below example, we allocate a new buffer that’s much larger than needed, then encode the data into that buffer and slice it down into a new buffer that’s only as large as needed.
```tsx
import * as borsh from '@project-serum/borsh'
const equipPlayerSchema = borsh.struct([
borsh.u8('variant'),
borsh.u16('playerId'),
borsh.u256('itemId')
])
const buffer = Buffer.alloc(1000)
equipPlayerSchema.encode({ variant: 2, playerId: 1435, itemId: 737498 }, buffer)
const instructionBuffer = buffer.slice(0, equipPlayerSchema.getSpan(buffer))
```
Once a buffer is properly created and the data serialized, all that’s left is building the transaction. This is similar to what you’ve done in previous lessons. The example below assumes that:
- `player`, `playerInfoAccount`, and `PROGRAM_ID` are already defined somewhere outside the code snippet
- `player` is a user’s public key
- `playerInfoAccount` is the public key of the account where inventory changes will be written
- `SystemProgram` will be used in the process of executing the instruction.
```tsx
import * as borsh from '@project-serum/borsh'
import * as web3 from '@solana/web3.js'
const equipPlayerSchema = borsh.struct([
borsh.u8('variant'),
borsh.u16('playerId'),
borsh.u256('itemId')
])
const buffer = Buffer.alloc(1000)
equipPlayerSchema.encode({ variant: 2, playerId: 1435, itemId: 737498 }, buffer)
const instructionBuffer = buffer.slice(0, equipPlayerSchema.getSpan(buffer))
const endpoint = web3.clusterApiUrl('devnet')
const connection = new web3.Connection(endpoint)
const transaction = new web3.Transaction()
const instruction = new web3.TransactionInstruction({
keys: [
{
pubkey: player.publicKey,
isSigner: true,
isWritable: false,
},
{
pubkey: playerInfoAccount,
isSigner: false,
isWritable: true,
},
{
pubkey: web3.SystemProgram.programId,
isSigner: false,
isWritable: false,
}
],
data: instructionBuffer,
programId: PROGRAM_ID
})
transaction.add(instruction)
web3.sendAndConfirmTransaction(connection, transaction, [player]).then((txid) => {
console.log(`Transaction submitted: https://explorer.solana.com/tx/${txid}?cluster=devnet`)
})
```
# Demo
Let’s practice this together by building a Movie Review app that lets users submit a movie review and have it stored on Solana’s network. We’ll build this app a little bit at a time over the next few lessons, adding new functionality each lesson.
![Screenshot of movie review frontend](../assets/movie-reviews-frontend.png)
The public key for the Solana program we’ll use for this application is `CenYq6bDRB7p73EjsPEpiYN7uveyPUTdXkDkgUduboaN`.
### 1. Download the starter code
Before we get started, go ahead and download the [starter code](https://github.com/Unboxed-Software/solana-movie-frontend/tree/starter).
The project is a fairly simple Next.js application. It includes the `WalletContextProvider` we created in the Wallets lesson, a `Card` component for displaying a movie review, a `MovieList` component that displays reviews in a list, a `Form` component for submitting a new review, and a `Movie.ts` file that contains a class definition for a `Movie` object.
Note that for now, the movies displayed on the page when you run `npm run dev` are mocks. In this lesson, we’ll focus on adding a new review but we won’t actually be able to see that review displayed. Next lesson, we’ll focus on deserializing custom data from on-chain accounts.
### 2. Create the buffer layout
Remember that to properly interact with a Solana program, you need to know how it expects data to be structured. Our Movie Review program is expecting instruction data to contain:
1. `variant` as an unsigned, 8-bit integer representing which instruction should be executed (in other words which function on the program should be called).
2. `title` as a string representing the title of the movie that you are reviewing.
3. `rating` as an unsigned, 8-bit integer representing the rating out of 5 that you are giving to the movie you are reviewing.
4. `description` as a string representing the written portion of the review you are leaving for the movie.
Let’s configure a `borsh` layout in the `Movie` class. Start by importing `@project-serum/borsh`. Next, create a `borshInstructionSchema` property and set it to the appropriate `borsh` struct containing the properties listed above.
```tsx
import * as borsh from '@project-serum/borsh'
export class Movie {
title: string;
rating: number;
description: string;
...
borshInstructionSchema = borsh.struct([
borsh.u8('variant'),
borsh.str('title'),
borsh.u8('rating'),
borsh.str('description'),
])
}
```
Keep in mind that *order matters*. If the order of properties here differs from how the program is structured, the transaction will fail.
### 3. Create a method to serialize data
Now that we have the buffer layout set up, let’s create a method in `Movie` called `serialize()` that will return a `Buffer` with a `Movie` object’s properties encoded into the appropriate layout.
```tsx
import * as borsh from '@project-serum/borsh'
export class Movie {
title: string;
rating: number;
description: string;
...
borshInstructionSchema = borsh.struct([
borsh.u8('variant'),
borsh.str('title'),
borsh.u8('rating'),
borsh.str('description'),
])
serialize(): Buffer {
const buffer = Buffer.alloc(1000)
this.borshInstructionSchema.encode({ ...this, variant: 0 }, buffer)
return buffer.slice(0, this.borshInstructionSchema.getSpan(buffer))
}
}
```
The method shown above first creates a large enough buffer for our object, then encodes `{ ...this, variant: 0 }` into the buffer. Because the `Movie` class definition contains 3 of the 4 properties required by the buffer layout and uses the same naming, we could use it directly with the spread operator and just add the `variant` property. Finally, the method returns a new buffer that leaves off the unused portion of the original.
### 4. Send transaction when user submits form
Now that we have the building blocks for the instruction data, we can create and send the transaction when a user submits the form. Open `Form.tsx` and locate the `handleTransactionSubmit` function. This gets called by `handleSubmit` each time a user submits the Movie Review form.
Inside this function, we’ll be creating and sending the transaction that contains the data submitted through the form.
Start by importing `@solana/web3.js` and importing `useConnection` and `useWallet` from `@solana/wallet-adapter-react`.
```tsx
import { FC } from 'react'
import { Movie } from '../models/Movie'
import { useState } from 'react'
import { Box, Button, FormControl, FormLabel, Input, NumberDecrementStepper, NumberIncrementStepper, NumberInput, NumberInputField, NumberInputStepper, Textarea } from '@chakra-ui/react'
import * as web3 from '@solana/web3.js'
import { useConnection, useWallet } from '@solana/wallet-adapter-react'
```
Next, before the `handleSubmit` function, call `useConnection()` to get a `connection` object and call `useWallet()` to get `publicKey` and `sendTransaction`.
```tsx
import { FC } from 'react'
import { Movie } from '../models/Movie'
import { useState } from 'react'
import { Box, Button, FormControl, FormLabel, Input, NumberDecrementStepper, NumberIncrementStepper, NumberInput, NumberInputField, NumberInputStepper, Textarea } from '@chakra-ui/react'
import * as web3 from '@solana/web3.js'
import { useConnection, useWallet } from '@solana/wallet-adapter-react'
const MOVIE_REVIEW_PROGRAM_ID = 'CenYq6bDRB7p73EjsPEpiYN7uveyPUTdXkDkgUduboaN'
export const Form: FC = () => {
const [title, setTitle] = useState('')
const [rating, setRating] = useState(0)
const [message, setMessage] = useState('')
const { connection } = useConnection();
const { publicKey, sendTransaction } = useWallet();
const handleSubmit = (event: any) => {
event.preventDefault()
const movie = new Movie(title, rating, description)
handleTransactionSubmit(movie)
}
...
}
```
Before we implement `handleTransactionSubmit`, let’s talk about what needs to be done. We need to:
1. Check that `publicKey` exists to ensure that the user has connected their wallet.
2. Call `serialize()` on `movie` to get a buffer representing the instruction data.
3. Create a new `Transaction` object.
4. Get all of the accounts that the transaction will read from or write to.
5. Create a new `Instruction` object that includes all of these accounts in the `keys` argument, includes the buffer in the `data` argument, and includes the program’s public key in the `programId` argument.
6. Add the instruction from the last step to the transaction.
7. Call `sendTransaction`, passing in the assembled transaction.
That’s quite a lot to process! But don’t worry, it gets easier the more you do it. Let’s start with the first 3 steps from above:
```tsx
const handleTransactionSubmit = async (movie: Movie) => {
if (!publicKey) {
alert('Please connect your wallet!')
return
}
const buffer = movie.serialize()
const transaction = new web3.Transaction()
}
```
The next step is to get all of the accounts that the transaction will read from or write to. In past lessons, the account where data will be stored has been given to you. This time, the account’s address is more dynamic, so it needs to be computed. We’ll cover this in depth in the next lesson, but for now you can use the following, where `pda` is the address to the account where data will be stored:
```tsx
const [pda] = await web3.PublicKey.findProgramAddress(
[publicKey.toBuffer(), Buffer.from(movie.title)],
new web3.PublicKey(MOVIE_REVIEW_PROGRAM_ID)
)
```
In addition to this account, the program will also need to read from `SystemProgram`, so our array needs to include `web3.SystemProgram.programId` as well.
With that, we can finish the remaining steps:
```tsx
const handleTransactionSubmit = async (movie: Movie) => {
if (!publicKey) {
alert('Please connect your wallet!')
return
}
const buffer = movie.serialize()
const transaction = new web3.Transaction()
const [pda] = await web3.PublicKey.findProgramAddress(
[publicKey.toBuffer(), new TextEncoder().encode(movie.title)],
new web3.PublicKey(MOVIE_REVIEW_PROGRAM_ID)
)
const instruction = new web3.TransactionInstruction({
keys: [
{
pubkey: publicKey,
isSigner: true,
isWritable: false,
},
{
pubkey: pda,
isSigner: false,
isWritable: true
},
{
pubkey: web3.SystemProgram.programId,
isSigner: false,
isWritable: false
}
],
data: buffer,
programId: new web3.PublicKey(MOVIE_REVIEW_PROGRAM_ID)
})
transaction.add(instruction)
try {
let txid = await sendTransaction(transaction, connection)
console.log(`Transaction submitted: https://explorer.solana.com/tx/${txid}?cluster=devnet`)
} catch (e) {
alert(JSON.stringify(e))
}
}
```
And that’s it! You should be able to use the form on the site now to submit a movie review. While you won’t see the UI update to reflect the new review, you can look at the transaction’s program logs on Solana Explorer to see that it was successful.
If you need a bit more time with this project to feel comfortable, have a look at the complete [solution code](https://github.com/Unboxed-Software/solana-movie-frontend/tree/solution-serialize-instruction-data).
# Challenge
Now it’s your turn to build something independently. Create an application that lets students of this course introduce themselves! The Solana program that supports this is at `HdE95RSVsdb315jfJtaykXhXY478h53X6okDupVfY9yf`.
![Screenshot of Student Intros frontend](../assets/student-intros-frontend.png)
1. You can build this from scratch or you can download the starter code [here](https://github.com/Unboxed-Software/solana-student-intros-frontend/tree/starter).
2. Create the instruction buffer layout in `StudentIntro.ts`. The program expects instruction data to contain:
1. `variant` as an unsigned, 8-bit integer representing the instruction to run (should be 0).
2. `name` as a string representing the student's name.
3. `message` as a string representing the message the student is sharing about their Solana journey.
3. Create a method in `StudentIntro.ts` that will use the buffer layout to serialize a `StudentIntro` object.
4. In the `Form` component, implement the `handleTransactionSubmit` function so that it serializes a `StudentIntro`, builds the appropriate transaction instructions and transaction, and submits the transaction to the user's wallet.
5. You should be able to submit now and have the information stored on chain! Be sure to log the transaction id and look at it in Solana Explorer to verify that it worked.
If you get really stumped, feel free to check out the solution code [here](https://github.com/Unboxed-Software/solana-student-intros-frontend/tree/solution-serialize-instruction-data).
Feel free to get creative with these challenges and take them even further. The instructions aren't here to hold you back! | 50.735941 | 713 | 0.764156 | eng_Latn | 0.997493 |
f484e7d94fc3b7ceec1a75c4c4de302192957f44 | 2,149 | md | Markdown | docs/visual-basic/misc/bc31143.md | rprouse/docs-microsoft | af49757a2295db7fab1a4aea118fbb896861dba8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc31143.md | rprouse/docs-microsoft | af49757a2295db7fab1a4aea118fbb896861dba8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc31143.md | rprouse/docs-microsoft | af49757a2295db7fab1a4aea118fbb896861dba8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Method '<methodname>' does not have a signature compatible with delegate <'delegatename'>"
ms.date: 07/20/2015
ms.prod: .net
ms.technology:
- "devlang-visual-basic"
ms.topic: "article"
f1_keywords:
- "vbc31143"
- "bc31143"
helpviewer_keywords:
- "BC31143"
ms.assetid: 88990637-7c92-467e-a3d3-db5498dc1dce
caps.latest.revision: 9
author: dotnet-bot
ms.author: dotnetcontent
---
# Method '<methodname>' does not have a signature compatible with delegate <'delegatename'>
This error occurs when a conversion is required between a method and a delegate that is not possible. The cause of the error can be conversion between parameters or, when the method and delegate are functions, conversion in the return values.
The following code illustrates failed conversions. The delegate is `FunDel`.
```vb
Delegate Function FunDel(ByVal i As Integer, ByVal d As Double) As Integer
```
The following functions each differ from `FunDel` in a way that will cause this error.
```vb
Function ExampleMethod1(ByVal m As Integer, ByVal aDate As Date) As Integer
End Function
Function ExampleMethod2(ByVal m As Integer, ByVal aDouble As Double) As Date
End Function
```
Each of the following assignment statements causes the error.
```vb
Sub Main()
' The second parameters of FunDel and ExampleMethod1, Double and Date,
' are not compatible.
'Dim d1 As FunDel = AddressOf ExampleMethod1
' The return types of FunDel and ExampleMethod2, Integer and Date,
' are not compatible.
'Dim d2 As FunDel = AddressOf ExampleMethod2
End Sub
```
**Error ID:** BC31143
## To correct this error
- Examine the corresponding parameters and, if they are present, return types to determine which pair is not compatible.
## See Also
[Relaxed Delegate Conversion](../../visual-basic/programming-guide/language-features/delegates/relaxed-delegate-conversion.md)
[NOT IN BUILD: Delegates and the AddressOf Operator](http://msdn.microsoft.com/en-us/7b2ed932-8598-4355-b2f7-5cedb23ee86f)
| 35.229508 | 244 | 0.723592 | eng_Latn | 0.96001 |
f484fccb8f47f68b599f10e8fde4b74e9efee05c | 1,165 | md | Markdown | README.md | k-kawa/awesome-viz | 47fd8d668fe7706d804bf55366c67f7245e6c29b | [
"MIT"
] | 1 | 2016-11-07T05:20:31.000Z | 2016-11-07T05:20:31.000Z | README.md | k-kawa/awesome-viz | 47fd8d668fe7706d804bf55366c67f7245e6c29b | [
"MIT"
] | null | null | null | README.md | k-kawa/awesome-viz | 47fd8d668fe7706d804bf55366c67f7245e6c29b | [
"MIT"
] | null | null | null | # awesome-viz
Curated list of AWESOME visualization tools
## Dashboard
- [Redash](https://github.com/getredash/redash) Make Your Company Data Driven. Connect to any data source, easily visualize and share your data
- [Caravel](https://github.com/airbnb/caravel) Caravel is a data exploration platform designed to be visual, intuitive, and interactive
- [Kibana](https://github.com/elastic/kibana) Kibana analytics and search dashboard for Elasticsearch
- [Grafana](https://github.com/grafana/grafana) Gorgeous metric viz, dashboards & editors for Graphite, InfluxDB & OpenTSDB
## Data Discovery
- [metacat](https://github.com/Netflix/metacat) [Making Big Data Discoverable and Meaningful at Netflix](https://medium.com/netflix-techblog/metacat-making-big-data-discoverable-and-meaningful-at-netflix-56fb36a53520)
- [WhereHows](https://github.com/linkedin/WhereHows) Data Discovery and Lineage for Big Data Ecosystem
- [Atlas](https://atlas.apache.org/) Data Governance and Metadata framework for Hadoop
- [Alation](https://alation.com/) Alation Data Catalog: A Single Source of Reference
## How to contribute
Send me a pull request. Any pull request is welcome!
| 64.722222 | 217 | 0.785408 | yue_Hant | 0.287897 |
f485b71a3aec8044cfe20c5e2c7d1a05f29be118 | 680 | md | Markdown | 2020/CVE-2020-23653.md | sei-vsarvepalli/cve | fbd9def72dd8f1b479c71594bfd55ddb1c3be051 | [
"MIT"
] | 4 | 2022-03-01T12:31:42.000Z | 2022-03-29T02:35:57.000Z | 2020/CVE-2020-23653.md | sei-vsarvepalli/cve | fbd9def72dd8f1b479c71594bfd55ddb1c3be051 | [
"MIT"
] | null | null | null | 2020/CVE-2020-23653.md | sei-vsarvepalli/cve | fbd9def72dd8f1b479c71594bfd55ddb1c3be051 | [
"MIT"
] | 1 | 2022-03-29T02:35:58.000Z | 2022-03-29T02:35:58.000Z | ### [CVE-2020-23653](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-23653)
![](https://img.shields.io/static/v1?label=Product&message=n%2Fa&color=blue)
![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue)
![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brighgreen)
### Description
An insecure unserialize vulnerability was discovered in ThinkAdmin versions 4.x through 6.x in app/admin/controller/api/Update.php and app/wechat/controller/api/Push.php, which may lead to arbitrary remote code execution.
### POC
#### Reference
- https://github.com/zoujingli/ThinkAdmin/issues/238
#### Github
No GitHub POC found.
| 37.777778 | 221 | 0.755882 | eng_Latn | 0.331122 |
f486b3faa2eae3e9fe44a14544563be29dac19b2 | 6,996 | md | Markdown | wdk-ddi-src/content/pep_x/ns-pep_x-_pep_work_information.md | kein284/windows-driver-docs-ddi | 3b70e03c2ddd9d8d531f4dc2fb9a4bb481b7dd54 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/pep_x/ns-pep_x-_pep_work_information.md | kein284/windows-driver-docs-ddi | 3b70e03c2ddd9d8d531f4dc2fb9a4bb481b7dd54 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/pep_x/ns-pep_x-_pep_work_information.md | kein284/windows-driver-docs-ddi | 3b70e03c2ddd9d8d531f4dc2fb9a4bb481b7dd54 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:pep_x._PEP_WORK_INFORMATION
title: _PEP_WORK_INFORMATION (pep_x.h)
description: The PEP_WORK_INFORMATION structure describes a work item that the PEP is submitting to the Windows power management framework (PoFx).
old-location: kernel\pep_work_information.htm
tech.root: kernel
ms.assetid: 7A3B2A94-AE6F-4DCC-9CDF-E2D5799C9F0D
ms.date: 04/30/2018
keywords: ["PEP_WORK_INFORMATION structure"]
ms.keywords: "*PPEP_WORK_INFORMATION, PEP_WORK_INFORMATION, PEP_WORK_INFORMATION structure [Kernel-Mode Driver Architecture], PPEP_WORK_INFORMATION, PPEP_WORK_INFORMATION structure pointer [Kernel-Mode Driver Architecture], _PEP_WORK_INFORMATION, kernel.pep_work_information, pepfx/PEP_WORK_INFORMATION, pepfx/PPEP_WORK_INFORMATION"
req.header: pep_x.h
req.include-header: Pep_x.h
req.target-type: Windows
req.target-min-winverclnt: Supported starting with Windows 10.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames: PEP_WORK_INFORMATION, *PPEP_WORK_INFORMATION
f1_keywords:
- _PEP_WORK_INFORMATION
- pep_x/_PEP_WORK_INFORMATION
- PPEP_WORK_INFORMATION
- pep_x/PPEP_WORK_INFORMATION
- PEP_WORK_INFORMATION
- pep_x/PEP_WORK_INFORMATION
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- pepfx.h
api_name:
- PEP_WORK_INFORMATION
---
# _PEP_WORK_INFORMATION structure
## -description
The <b>PEP_WORK_INFORMATION</b> structure describes a work item that the PEP is submitting to the Windows <a href="https://docs.microsoft.com/windows-hardware/drivers/kernel/overview-of-the-power-management-framework">power management framework</a> (PoFx).
## -struct-fields
### -field WorkType
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ne-pepfx-_pep_work_type">PEP_WORK_TYPE</a> enumeration value. This member indicates the type of work requested by the PEP, which also determines the type of structure that is contained in the unnamed union in the <b>PEP_WORK_INFORMATION</b> structure.
### -field PowerControl
### -field CompleteIdleState
### -field CompletePerfState
### -field AcpiNotify
### -field ControlMethodComplete
#### - ( unnamed union )
The data structure that is associated with the type of work specified by the <b>WorkType</b> member.
#### PowerControl
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_power_control">PEP_WORK_POWER_CONTROL</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkRequestPowerControl</b>.
#### CompleteIdleState
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_complete_idle_state">PEP_WORK_COMPLETE_IDLE_STATE</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkCompleteIdleState</b>.
#### CompletePerfState
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_complete_perf_state">PEP_WORK_COMPLETE_PERF_STATE</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkCompletePerfState</b>.
#### AcpiNotify
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_acpi_notify">PEP_WORK_ACPI_NOTIFY</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkAcpiNotify</b>.
#### ControlMethodComplete
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_acpi_evaluate_control_method_complete">PEP_WORK_ACPI_EVALUATE_CONTROL_METHOD_COMPLETE</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkAcpiEvaluateControlMethodComplete</b>.
##### - ( unnamed union ).AcpiNotify
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_acpi_notify">PEP_WORK_ACPI_NOTIFY</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkAcpiNotify</b>.
##### - ( unnamed union ).CompleteIdleState
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_complete_idle_state">PEP_WORK_COMPLETE_IDLE_STATE</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkCompleteIdleState</b>.
##### - ( unnamed union ).CompletePerfState
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_complete_perf_state">PEP_WORK_COMPLETE_PERF_STATE</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkCompletePerfState</b>.
##### - ( unnamed union ).ControlMethodComplete
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_acpi_evaluate_control_method_complete">PEP_WORK_ACPI_EVALUATE_CONTROL_METHOD_COMPLETE</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkAcpiEvaluateControlMethodComplete</b>.
##### - ( unnamed union ).PowerControl
A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_power_control">PEP_WORK_POWER_CONTROL</a> structure. This structure is used if <b>WorkType</b> = <b>PepWorkRequestPowerControl</b>.
## -remarks
The <b>WorkInformation</b> member of the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work">PEP_WORK</a> structure is a pointer to a <b>PEP_WORK_INFORMATION</b> structure.
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/kernel/using-peps-for-acpi-services">PEP_DPM_WORK</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work">PEP_WORK</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_acpi_notify">PEP_WORK_ACPI_NOTIFY</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pep_x/ns-pep_x-_pep_work_active_complete">PEP_WORK_ACTIVE_COMPLETE</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_complete_idle_state">PEP_WORK_COMPLETE_IDLE_STATE</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_complete_perf_state">PEP_WORK_COMPLETE_PERF_STATE</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pep_x/ns-pep_x-_pep_work_device_idle">PEP_WORK_DEVICE_IDLE</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pep_x/ns-pep_x-_pep_work_device_power">PEP_WORK_DEVICE_POWER</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pep_x/ns-pep_x-_pep_work_idle_state">PEP_WORK_IDLE_STATE</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ns-pepfx-_pep_work_power_control">PEP_WORK_POWER_CONTROL</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/pepfx/ne-pepfx-_pep_work_type">PEP_WORK_TYPE</a>
| 38.229508 | 333 | 0.767439 | yue_Hant | 0.67339 |
f486ee471e9835d40d7a000f6d7f78e6465cdc84 | 19 | md | Markdown | README.md | musan12/musan12.github.io | b607a1a1587f44561d4b114ce20fcd96440ce0b3 | [
"MIT"
] | null | null | null | README.md | musan12/musan12.github.io | b607a1a1587f44561d4b114ce20fcd96440ce0b3 | [
"MIT"
] | null | null | null | README.md | musan12/musan12.github.io | b607a1a1587f44561d4b114ce20fcd96440ce0b3 | [
"MIT"
] | null | null | null | # musan12.github.io | 19 | 19 | 0.789474 | vie_Latn | 0.160076 |
f487353957a6ea55d1a24ec634622468f3236414 | 2,582 | md | Markdown | README.md | iphelix/airoscapy | f7eb5be999c274bbdbd31970e842578cf4fe4923 | [
"MIT"
] | 5 | 2019-01-31T03:13:30.000Z | 2022-02-03T07:53:01.000Z | README.md | iphelix/airoscapy | f7eb5be999c274bbdbd31970e842578cf4fe4923 | [
"MIT"
] | null | null | null | README.md | iphelix/airoscapy | f7eb5be999c274bbdbd31970e842578cf4fe4923 | [
"MIT"
] | 3 | 2019-03-07T18:00:02.000Z | 2022-02-03T07:53:03.000Z | Airoscapy is a passive wireless access point scanner similar to Kismet and Aircrack. The tool was built to demonstrate the power of Scapy as a generic and powerful network monitoring library.
Scapy
====
The tool requires installation of Scapy. You can obtain Scapy either directly from the developer <http://www.secdev.org/projects/scapy/> or use your operating system's packaging system to download and install it. For example, here is how you would set it up on an Ubuntu system:
sudo apt-get install python-scapy
Setting up a monitor interface
==================
In order to detect Access Points without sending Probe requests, you must enable monitor mode on your card. On a modern linux system you can use **iw** wireless configuration utility to set up a monitor interface. Assuming you existing wireless card is identified as **wlan0** follow these commands to create a **mon0** monitoring interface:
# iw dev wlan0 interface add mon0 type monitor
# ifconfig mon0 up
Test whether you can capture packets by using **tcpdump**:
# tcpdump -i mon0
You should see a stream of packets looking something like this:
18:42:49.448730 1.0 Mb/s 2462 MHz 11b -82dB signal antenna 1 [bit 14] Beacon (linksys) [1.0* 2.0* 5.5* 11.0* 18.0 24.0 36.0 54.0 Mbit] ESS CH: 11, PRIVACY[|802.11]
18:42:49.452508 2.0 Mb/s 2462 MHz 11b -76dB signal antenna 1 [bit 14] Beacon (cisco) [1.0* 2.0* 5.5 11.0 Mbit] ESS CH: 11, PRIVACY
18:42:49.465215 1.0 Mb/s 2462 MHz 11b -78dB signal antenna 1 [bit 14] Beacon (buffalo) [1.0* 2.0* 5.5* 11.0* 6.0 9.0 12.0 18.0 Mbit] ESS CH: 11, PRIVACY[|802.11]
...
Running Airoscapy
============
Using Airoscapy is very straightforward. Simply specify a monitor interface as the first command line parameter and execute the tool with root privileges. Press CTRL-C to stop scanning.
# python airoscapy.py mon0
-=-=-=-=-=-= AIROSCAPY =-=-=-=-=-=-
CH ENC BSSID SSID
11 Y c0:c1:c0:12:34:56 caesars
11 Y 00:24:01:12:34:56 cosmopolitan
11 Y c0:c1:c0:12:34:56 encore
09 N c0:c1:c0:12:34:56 excalibur
09 Y 00:16:b6:12:34:56 flamingo
05 N 00:16:b6:12:34:56 golden nugget
01 Y e0:91:f5:12:34:56 hard rock
01 Y 00:22:3f:12:34:56 harrahs
07 Y 00:1e:52:12:34:56 luxor
07 Y c0:3f:0e:12:34:56 mgm
07 Y 00:30:bd:12:34:56 mirage
10 Y 00:25:9c:12:34:56 nyny
11 N c0:c1:c0:12:34:56 palms
11 Y 00:14:6c:12:34:56 rio
^C
-=-=-=-=-= STATISTICS =-=-=-=-=-=-
Total APs found: 14
Encrypted APs : 11
Unencrypted APs: 3
| 43.033333 | 341 | 0.67622 | eng_Latn | 0.88561 |
f4876a209a3406f70055d453190745213d893d32 | 3,198 | md | Markdown | docs/framework/unmanaged-api/debugging/icordebugblockingobjectenum-next-method.md | michaelgoin/dotnet-docs | 89e583ef512e91fb9910338acd151ee1352a3799 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icordebugblockingobjectenum-next-method.md | michaelgoin/dotnet-docs | 89e583ef512e91fb9910338acd151ee1352a3799 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icordebugblockingobjectenum-next-method.md | michaelgoin/dotnet-docs | 89e583ef512e91fb9910338acd151ee1352a3799 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "ICorDebugBlockingObjectEnum::Next Method"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "reference"
api_name:
- "ICorDebugBlockingObjectEnum.Next Method"
api_location:
- "mscordbi.dll"
api_type:
- "COM"
f1_keywords:
- "ICorDebugBlockingObjectEnum::Next"
helpviewer_keywords:
- "Next method, ICorDebugBlockingObjectEnum interface [.NET Framework debugging]"
- "ICorDebugBlockingObjectEnum::Next method [.NET Framework debugging]"
ms.assetid: 0121753f-ebea-48d0-aeb2-ed7fda76dc60
topic_type:
- "apiref"
caps.latest.revision: 7
author: "rpetrusha"
ms.author: "ronpet"
manager: "wpickett"
---
# ICorDebugBlockingObjectEnum::Next Method
Gets the specified number of [CorDebugBlockingObject](../../../../docs/framework/unmanaged-api/debugging/cordebugblockingobject-structure.md) objects from the enumeration, starting at the current position.
## Syntax
```
HRESULT Next([in] ULONG celt,
[out, size_is(celt), length_is(*pceltFetched)]
CorDebugBlockingOjbect values[],
[out] ULONG *pceltFetched;
```
#### Parameters
`celt`
[in] The number of objects to retrieve.
`values`
[out] An array of pointers to [CorDebugBlockingObject](../../../../docs/framework/unmanaged-api/debugging/cordebugblockingobject-structure.md) objects.
`pceltFetched`
[out] A pointer to the number of objects that were retrieved.
## Return Value
This method returns the following specific HRESULTs.
|HRESULT|Description|
|-------------|-----------------|
|S_OK|The method completed successfully.|
|S_FALSE|`pceltFetched` does not equal `celt`.|
## Remarks
This method functions like a typical COM enumerator.
The input array values must be at least of size `celt`. The array will be filled with either the next `celt` values in the enumeration or with all remaining values if fewer than `celt` remain. When this method returns, `pceltFetched` will be filled with the number of values that were retrieved. If `values` contains invalid pointers or points to a buffer that is smaller than `celt`, or if `pceltFetched` is an invalid pointer, the result is undefined.
> [!NOTE]
> Although the [CorDebugBlockingObject](../../../../docs/framework/unmanaged-api/debugging/cordebugblockingobject-structure.md) structure does not need to be released, the "ICorDebugValue" interface inside of it does need to be released.
-
## Requirements
**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** CorDebug.idl, CorDebug.h
**Library:** CorGuids.lib
**.NET Framework Versions:** [!INCLUDE[net_current_v40plus](../../../../includes/net-current-v40plus-md.md)]
## See Also
[ICorDebugDataTarget Interface](../../../../docs/framework/unmanaged-api/debugging/icordebugdatatarget-interface.md)
[Debugging Interfaces](../../../../docs/framework/unmanaged-api/debugging/debugging-interfaces.md)
[Debugging](../../../../docs/framework/unmanaged-api/debugging/index.md)
| 38.071429 | 456 | 0.701689 | eng_Latn | 0.649215 |
f48773267de799ec92c0addba98a2ccb35ee5bdb | 1,085 | md | Markdown | website/translated_docs/en-US/version-20.2.0/toolchain/microscope/microscope-intro.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 7 | 2019-12-26T08:38:06.000Z | 2020-12-17T09:29:01.000Z | website/translated_docs/en-US/version-20.2.0/toolchain/microscope/microscope-intro.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 38 | 2019-12-03T09:51:40.000Z | 2020-12-01T02:49:42.000Z | website/translated_docs/en-US/version-20.2.0/toolchain/microscope/microscope-intro.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 16 | 2019-12-03T06:15:55.000Z | 2022-02-20T12:04:01.000Z | ---
title: Microscope Introduction
id: version-20.2.0-microscope-intro
sidebar_label: Microscope Introduction
original_id: microscope-intro
---
Microscope 区块链浏览器,可用于查询所有 CITA 链上信息,并支持多链,可通过在元数据面板中切换目标链。技术栈为 React、TypeScript、Rxjs。主要功能模块包括:首页信息、区块信息、交易信息、账户信息、账户信息、配置页面。可访问[官方浏览器](https://microscope.citahub.com/#/)进行体验。
## 开发&用户手册
https://github.com/citahub/microscope-v2/blob/develop/README-CN.md
## 项目结构
* 主要业务逻辑位于 src 目录下
* components: 可复用的组件
* container: 页面和一些顶层组件( Dialog 等)
* context: 全局状态控制, 目前包括页面配置和全局 Observable
* Routes: 路由组件, 控制页面跳转
## 注意事项
* 对于新区块的监控位于 context 中的 newBlockByNumberSubject 这个 Observable, 全局共用;
* i18n 方案采用了 i18next 和 locize 服务, 自动读取页面上的字段到 locize 上, 翻译后在线替换页面上的文本;
* Microscope 设计之初考虑到直连节点和连接缓存服务器两种情况, 所以大部分接口还是 rpc 请求;
* 考虑到浏览器的轻量化, sdk 使用单独封装的 @appchain/observables, `@citahub/cita-signer`;
* 引入签名模块对交易信息进行一次解签。
* 对于交易详情的处理, 由于 rpc 接口返回的交易只包含序列化后的交易信息, 需要通过 `@citahub/cita-signer` 的 unsign 方法来解析。
## 期待协作
https://github.com/citahub/microscope-v2/issues
我们正在招募社区开发者,想要获得更多资讯欢迎申请加入 CITAHub:https://www.citahub.com/#joinArea
| 28.552632 | 174 | 0.767742 | yue_Hant | 0.862212 |
f48783779fe4287a2f301a8bb6f6d8a2bdac64be | 17,779 | md | Markdown | docs/setup/security/shiro_authentication.md | scMarkus/zeppelin | 78b7b3ad3377deabfd355223cdd21d2470b4ca54 | [
"Apache-2.0"
] | 1 | 2022-03-16T08:35:03.000Z | 2022-03-16T08:35:03.000Z | docs/setup/security/shiro_authentication.md | scMarkus/zeppelin | 78b7b3ad3377deabfd355223cdd21d2470b4ca54 | [
"Apache-2.0"
] | 6 | 2022-03-23T12:30:02.000Z | 2022-03-30T21:09:28.000Z | docs/setup/security/shiro_authentication.md | scMarkus/zeppelin | 78b7b3ad3377deabfd355223cdd21d2470b4ca54 | [
"Apache-2.0"
] | 3 | 2021-10-01T11:04:17.000Z | 2021-12-15T14:12:48.000Z | ---
layout: page
title: "Apache Shiro Authentication for Apache Zeppelin"
description: "Apache Shiro is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. This document explains step by step how Shiro can be used for Zeppelin notebook authentication."
group: setup/security
---
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
{% include JB/setup %}
# Apache Shiro authentication for Apache Zeppelin
<div id="toc"></div>
## Overview
[Apache Shiro](http://shiro.apache.org/) is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. In this documentation, we will explain step by step how Shiro works for Zeppelin notebook authentication.
When you connect to Apache Zeppelin, you will be asked to enter your credentials. Once you logged in, then you have access to all notes including other user's notes.
## Important Note
By default, Zeppelin allows anonymous access. It is strongly recommended that you consider setting up Apache Shiro for authentication (as described in this document, see 2 Secure the Websocket channel), or only deploy and use Zeppelin in a secured and trusted environment.
## Security Setup
You can setup **Zeppelin notebook authentication** in some simple steps.
### 1. Enable Shiro
By default in `conf`, you will find `shiro.ini.template`, this file is used as an example and it is strongly recommended
to create a `shiro.ini` file by doing the following command line
```bash
cp conf/shiro.ini.template conf/shiro.ini
```
For the further information about `shiro.ini` file format, please refer to [Shiro Configuration](http://shiro.apache.org/configuration.html#Configuration-INISections).
### 2. Start Zeppelin
```bash
bin/zeppelin-daemon.sh start #(or restart)
```
Then you can browse Zeppelin at [http://localhost:8080](http://localhost:8080).
### 3. Login
Finally, you can login using one of the below **username/password** combinations.
<center><img src="{{BASE_PATH}}/assets/themes/zeppelin/img/docs-img/zeppelin-login.png"></center>
```
[users]
admin = password1, admin
user1 = password2, role1, role2
user2 = password3, role3
user3 = password4, role2
```
You can set the roles for each users next to the password.
## Groups and permissions (optional)
In case you want to leverage user groups and permissions, use one of the following configuration for LDAP or AD under `[main]` segment in `shiro.ini`.
```
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = userNameA
activeDirectoryRealm.systemPassword = passwordA
activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
activeDirectoryRealm.url = ldap://ldap.test.com:389
activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"
activeDirectoryRealm.authorizationCachingEnabled = false
activeDirectoryRealm.principalSuffix = @corp.company.net
ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm
# search base for ldap groups (only relevant for LdapGroupRealm):
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COM
ldapRealm.contextFactory.url = ldap://ldap.test.com:389
ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COM
ldapRealm.contextFactory.authenticationMechanism = simple
```
also define roles/groups that you want to have in system, like below;
```
[roles]
admin = *
hr = *
finance = *
group1 = *
```
## Configure Realm (optional)
Realms are responsible for authentication and authorization in Apache Zeppelin. By default, Apache Zeppelin uses [IniRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/text/IniRealm.html) (users and groups are configurable in `conf/shiro.ini` file under `[user]` and `[group]` section). You can also leverage Shiro Realms like [JndiLdapRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/ldap/JndiLdapRealm.html), [JdbcRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/jdbc/JdbcRealm.html) or create [our own](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/AuthorizingRealm.html).
To learn more about Apache Shiro Realm, please check [this documentation](http://shiro.apache.org/realm.html).
We also provide community custom Realms.
**Note**: When using any of the below realms the default
password-based (IniRealm) authentication needs to be disabled.
### Active Directory
```
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = userNameA
activeDirectoryRealm.systemPassword = passwordA
activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceks
activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
activeDirectoryRealm.url = ldap://ldap.test.com:389
activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"
activeDirectoryRealm.authorizationCachingEnabled = false
activeDirectoryRealm.principalSuffix = @corp.company.net
```
Also instead of specifying systemPassword in clear text in shiro.ini administrator can choose to specify the same in "hadoop credential".
Create a keystore file using the hadoop credential commandline, for this the hadoop commons should be in the classpath
`hadoop credential create activeDirectoryRealm.systempassword -provider jceks://file/user/zeppelin/conf/zeppelin.jceks`
Change the following values in the Shiro.ini file, and uncomment the line:
`activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceks`
### LDAP
Two options exist for configuring an LDAP Realm. The simpler to use is the LdapGroupRealm. How ever it has limited
flexibility with mapping of ldap groups to users and for authorization for user groups. A sample configuration file for
this realm is given below.
```
ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm
# search base for ldap groups (only relevant for LdapGroupRealm):
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COM
ldapRealm.contextFactory.url = ldap://ldap.test.com:389
ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COM
ldapRealm.contextFactory.authenticationMechanism = simple
```
The other more flexible option is to use the LdapRealm. It allows for mapping of ldapgroups to roles and also allows for
role/group based authentication into the zeppelin server. Sample configuration for this realm is given below.
```
[main]
ldapRealm=org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.authenticationMechanism=simple
ldapRealm.contextFactory.url=ldap://localhost:33389
ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org
# Ability to set ldap paging Size if needed default is 100
ldapRealm.pagingSize = 200
ldapRealm.authorizationEnabled=true
ldapRealm.contextFactory.systemAuthenticationMechanism=simple
ldapRealm.searchBase=dc=hadoop,dc=apache,dc=org
ldapRealm.userSearchBase = dc=hadoop,dc=apache,dc=org
ldapRealm.groupSearchBase = ou=groups,dc=hadoop,dc=apache,dc=org
ldapRealm.groupObjectClass=groupofnames
# Allow userSearchAttribute to be customized
ldapRealm.userSearchAttributeName = sAMAccountName
ldapRealm.memberAttribute=member
# force usernames returned from ldap to lowercase useful for AD
ldapRealm.userLowerCase = true
# ability set searchScopes subtree (default), one, base
ldapRealm.userSearchScope = subtree;
ldapRealm.groupSearchScope = subtree;
ldapRealm.memberAttributeValueTemplate=cn={0},ou=people,dc=hadoop,dc=apache,dc=org
ldapRealm.contextFactory.systemUsername=uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
ldapRealm.contextFactory.systemPassword=S{ALIAS=ldcSystemPassword}
# enable support for nested groups using the LDAP_MATCHING_RULE_IN_CHAIN operator
ldapRealm.groupSearchEnableMatchingRuleInChain = true
# optional mapping from physical groups to logical application roles
ldapRealm.rolesByGroup = LDN_USERS: user_role, NYK_USERS: user_role, HKG_USERS: user_role, GLOBAL_ADMIN: admin_role
# optional list of roles that are allowed to authenticate. Incase not present all groups are allowed to authenticate (login).
# This changes nothing for url specific permissions that will continue to work as specified in [urls].
ldapRealm.allowedRolesForAuthentication = admin_role,user_role
ldapRealm.permissionsByRole= user_role = *:ToDoItemsJdo:*:*, *:ToDoItem:*:*; admin_role = *
securityManager.sessionManager = $sessionManager
securityManager.realms = $ldapRealm
```
Also instead of specifying systemPassword in clear text in `shiro.ini` administrator can choose to specify the same in "hadoop credential".
Create a keystore file using the hadoop credential command line:
```
hadoop credential create ldapRealm.systemPassword -provider jceks://file/user/zeppelin/conf/zeppelin.jceks
```
Add the following line in the `shiro.ini` file:
```
ldapRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceks
```
**Caution** due to a bug in LDAPRealm only ```ldapRealm.pagingSize``` results will be fetched from LDAP. In big directory Trees this may cause missing Roles. Try limiting the search Scope using ```ldapRealm.groupSearchBase``` or narrow down the required Groups using ```ldapRealm.groupSearchFilter```
### PAM
[PAM](https://en.wikipedia.org/wiki/Pluggable_authentication_module) authentication support allows the reuse of existing authentication
modules on the host where Zeppelin is running. On a typical system modules are configured per service for example sshd, passwd, etc. under `/etc/pam.d/`. You can
either reuse one of these services or create your own for Zeppelin. Activating PAM authentication requires two parameters:
1. realm: The Shiro realm being used
2. service: The service configured under `/etc/pam.d/` to be used. The name here needs to be the same as the file name under `/etc/pam.d/`
```
[main]
pamRealm=org.apache.zeppelin.realm.PamRealm
pamRealm.service=sshd
```
### Knox SSO
[KnoxSSO](https://knox.apache.org/books/knox-0-13-0/dev-guide.html#KnoxSSO+Integration) provides an abstraction for integrating any number of authentication systems and SSO solutions and enables participating web applications to scale to those solutions more easily. Without the token exchange capabilities offered by KnoxSSO each component UI would need to integrate with each desired solution on its own.
When Knox SSO is enabled for Zeppelin, the [Apache Hadoop Groups Mapping](https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/GroupsMapping.html) configuration will used internally to determine the group memberships of the user who is trying to log in. Role-based access permission can be set based on groups as seen by Hadoop.
To enable this, apply the following change in `conf/shiro.ini` under `[main]` section.
```
### A sample for configuring Knox JWT Realm
knoxJwtRealm = org.apache.zeppelin.realm.jwt.KnoxJwtRealm
## Domain of Knox SSO
knoxJwtRealm.providerUrl = https://domain.example.com/
## Url for login
knoxJwtRealm.login = gateway/knoxsso/knoxauth/login.html
## Url for logout
knoxJwtRealm.logout = gateway/knoxssout/api/v1/webssout
knoxJwtRealm.redirectParam = originalUrl
knoxJwtRealm.cookieName = hadoop-jwt
knoxJwtRealm.publicKeyPath = /etc/zeppelin/conf/knox-sso.pem
# This is required if KNOX SSO is enabled, to check if "knoxJwtRealm.cookieName" cookie was expired/deleted.
authc = org.apache.zeppelin.realm.jwt.KnoxAuthenticationFilter
```
### HTTP SPNEGO Authentication
HTTP SPNEGO (Simple and Protected GSS-API NEGOtiation) is the standard way to support Kerberos Ticket based user authentication for Web Services. Based on [Apache Hadoop Auth](https://hadoop.apache.org/docs/current/hadoop-auth/index.html), Zeppelin supports ability to authenticate users by accepting and validating their Kerberos Ticket.
When HTTP SPNEGO Authentication is enabled for Zeppelin, the [Apache Hadoop Groups Mapping](https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/GroupsMapping.html) configuration will used internally to determine the group memberships of the user who is trying to log in. Role-based access permission can be set based on groups as seen by Hadoop.
To enable this, apply the following change in `conf/shiro.ini` under `[main]` section.
```
krbRealm = org.apache.zeppelin.realm.kerberos.KerberosRealm
krbRealm.principal=HTTP/[email protected]
krbRealm.keytab=/etc/security/keytabs/spnego.service.keytab
krbRealm.nameRules=DEFAULT
krbRealm.signatureSecretFile=/etc/security/http_secret
krbRealm.tokenValidity=36000
krbRealm.cookieDomain=domain.com
krbRealm.cookiePath=/
authc = org.apache.zeppelin.realm.kerberos.KerberosAuthenticationFilter
```
For above configuration to work, user need to do some more configurations outside Zeppelin.
1. A valid SPNEGO keytab should be available on the Zeppelin node and should be readable by 'zeppelin' user. If there is a SPNEGO keytab already available (because of another Hadoop service), it can be reused here without generating a new keytab.
An example of working SPNEGO keytab could be:
```
$ klist -kt /etc/security/keytabs/spnego.service.keytab
Keytab name: FILE:/etc/security/keytabs/spnego.service.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 11/26/2018 16:58:38 HTTP/[email protected]
2 11/26/2018 16:58:38 HTTP/[email protected]
2 11/26/2018 16:58:38 HTTP/[email protected]
2 11/26/2018 16:58:38 HTTP/[email protected]
```
Ensure that the keytab premissions are sufficiently strict while still readable by the 'zeppelin' user:
```
$ ls -l /etc/security/keytabs/spnego.service.keytab
-r--r-----. 1 root hadoop 346 Nov 26 16:58 /etc/security/keytabs/spnego.service.keytab
```
Note that for the above example, the 'zeppelin' user can read the keytab because they are a member of the 'hadoop' group.
2. A secret signature file must be present on Zeppelin node, readable by 'zeppelin' user. This file contains the random binary numbers which is used to sign 'hadoop.auth' cookie, generated during SPNEGO exchange. If such a file is already generated and available on the Zeppelin node, it should be used rather than generating a new file.
Commands to generate a secret signature file (if required):
```
dd if=/dev/urandom of=/etc/security/http_secret bs=1024 count=1
chown hdfs:hadoop /etc/security/http_secret
chmod 440 /etc/security/http_secret
```
## Secure Cookie for Zeppelin Sessions (optional)
Zeppelin can be configured to set `HttpOnly` flag in the session cookie. With this configuration, Zeppelin cookies can
not be accessed via client side scripts thus preventing majority of Cross-site scripting (XSS) attacks.
To enable secure cookie support via Shiro, add the following lines in `conf/shiro.ini` under `[main]` section, after
defining a `sessionManager`.
```
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
cookie.secure = true
cookie.httpOnly = true
sessionManager.sessionIdCookie = $cookie
```
## Secure your Zeppelin information (optional)
By default, anyone who defined in `[users]` can share **Interpreter Setting**, **Credential** and **Configuration** information in Apache Zeppelin.
Sometimes you might want to hide these information for your use case.
Since Shiro provides **url-based security**, you can hide the information by commenting or uncommenting these below lines in `conf/shiro.ini`.
```
[urls]
/api/interpreter/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
```
In this case, only who have `admin` role can see **Interpreter Setting**, **Credential** and **Configuration** information.
If you want to grant this permission to other users, you can change **roles[ ]** as you defined at `[users]` section.
### Apply multiple roles in Shiro configuration
By default, Shiro will allow access to a URL if only user is part of "**all the roles**" defined like this:
```
[urls]
/api/interpreter/** = authc, roles[admin, role1]
```
### Apply multiple roles or user in Shiro configuration
If there is a need that user with "**any of the defined roles or user itself**" should be allowed, then following Shiro configuration can be used:
```
[main]
anyofrolesuser = org.apache.zeppelin.utils.AnyOfRolesUserAuthorizationFilter
[urls]
/api/interpreter/** = authc, anyofrolesuser[admin, user1]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
```
<br/>
> **NOTE :** All of the above configurations are defined in the `conf/shiro.ini` file.
## FAQ
Zeppelin sever is configured as form-based authentication but is behind proxy configured as basic-authentication for example [NGINX](./authentication_nginx.html#http-basic-authentication-using-nginx) and don't want Zeppelin-Server to clear authentication headers.
> Set `zeppelin.server.authorization.header.clear` to `false` in zeppelin-site.xml
## Other authentication methods
- [HTTP Basic Authentication using NGINX](./authentication_nginx.html)
| 49.941011 | 681 | 0.788965 | eng_Latn | 0.917385 |
f4878445d2372e493f92583f4a20f5e455ab19c2 | 821 | md | Markdown | doc/csCrypto_ko.md | sgtcloud/cocostyle | aea58ea32574a2a155713ae9094837d4e9ca1e98 | [
"MIT"
] | 24 | 2015-02-04T07:37:28.000Z | 2019-10-28T05:40:00.000Z | doc/csCrypto_ko.md | sincntx/cocostyle | aea58ea32574a2a155713ae9094837d4e9ca1e98 | [
"MIT"
] | null | null | null | doc/csCrypto_ko.md | sincntx/cocostyle | aea58ea32574a2a155713ae9094837d4e9ca1e98 | [
"MIT"
] | 13 | 2015-06-05T18:30:03.000Z | 2019-01-20T18:47:55.000Z | csCrypto
=========
### MD5
```
var str = "abc";
str = csCrypto.md5(str);
```
### SHA1
```
var str = "abc";
str = csCrypto.sha1(str);
```
### SHA256
```
var str = "abc";
str = csCrypto.sha256("hex", str);
str = csCrypto.sha256("dec", str);
str = csCrypto.sha256("bin", str);
```
### AES
```
var aes = new AES({block_mode:CBC, iv: 'asdfasdfasdfasdf', key: convert.hex_to_binstring('0123456712345678234567893456789a0123456712345678234567893456789a')});
var data = "rank2";
var encrypted = convert.binstring_to_hex(aes.encrypt(data));
```
### 클래스 세부사항
- `{String} csCrypto.md5(String string)`
- `{String} csCrypto.sha1(String string)`
- `{String} csCrypto.sha256(String mode(hex, bin, dec), String string)`
- 대부분의 AES 코드는 [cowcrypt](http://rubbingalcoholic.github.io/cowcrypt/api/AES.html)를 참조했습니다.
| 20.02439 | 166 | 0.65408 | kor_Hang | 0.429482 |
f487866dc1eefe0519072218d61c30d72a170da3 | 4,498 | md | Markdown | articles/azure-monitor/app/java-collectd.md | grayknight2/mc-docs.zh-cn | dc705774cac09f2b3eaeec3c0ecc17148604133e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/app/java-collectd.md | grayknight2/mc-docs.zh-cn | dc705774cac09f2b3eaeec3c0ecc17148604133e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/app/java-collectd.md | grayknight2/mc-docs.zh-cn | dc705774cac09f2b3eaeec3c0ecc17148604133e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 监视 Linux 上 Java Web 应用的性能 - Azure | Azure Docs
description: 通过 Application Insights 的 CollectD 插件监视 Java 网站的扩展应用程序性能。
ms.topic: conceptual
author: Johnnytechn
origin.date: 03/14/2019
ms.date: 05/28/2020
ms.author: v-johya
ms.openlocfilehash: fc5ebe00842b5278569254d1d95faa3552df9928
ms.sourcegitcommit: be0a8e909fbce6b1b09699a721268f2fc7eb89de
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/29/2020
ms.locfileid: "84199702"
---
# <a name="collectd-linux-performance-metrics-in-application-insights"></a>collectd:Application Insights 中的 Linux 性能指标
若要浏览 [Application Insights](../../azure-monitor/app/app-insights-overview.md) 中 Linux 系统性能指标,请安装 [collectd](https://collectd.org/) 及其 Application Insights 插件。 此开放源解决方案收集了各种系统和网络统计信息。
如果已[使用 Application Insights 检测了 Java Web 服务][java],则通常会使用 collectd。 它可提供更多数据,有助于增强应用性能或诊断问题。
## <a name="get-your-instrumentation-key"></a>获取检测密钥
在 [Azure 门户](https://portal.azure.cn)中,打开要显示数据的 [Application Insights](../../azure-monitor/app/app-insights-overview.md) 资源。 (或[创建新资源](../../azure-monitor/app/create-new-resource.md )。)
复制可标识资源的检测密钥。
![浏览全部,打开资源,并在“概要”下拉菜单中选择并复制该检测密钥](./media/java-collectd/instrumentation-key-001.png)
## <a name="install-collectd-and-the-plug-in"></a>安装 collectd 和插件
在 Linux 服务器计算机上:
1. 安装 [collectd](https://collectd.org/) 版本 5.4.0 或更高版本。
2. 下载 [Application Insights collectd 编写器插件](https://github.com/microsoft/ApplicationInsights-Java/tree/master/collectd/src/main/java/com/microsoft/applicationinsights/collectd/internal)。 注意版本号。
3. 将插件 JAR 复制到 `/usr/share/collectd/java`。
4. 编辑 `/etc/collectd/collectd.conf`:
* 确保已启用 [Java 插件](https://collectd.org/wiki/index.php/Plugin:Java)。
* 更新 java.class.path 的 JVMArg,以包括以下 JAR。 更新版本号,以匹配下载版本:
* `/usr/share/collectd/java/applicationinsights-collectd-1.0.5.jar`
* 使用资源中的检测密钥添加此代码段:
```XML
LoadPlugin "com.microsoft.applicationinsights.collectd.ApplicationInsightsWriter"
<Plugin ApplicationInsightsWriter>
InstrumentationKey "Your key"
</Plugin>
```
以下是示例配置文件的一部分:
```XML
...
# collectd plugins
LoadPlugin cpu
LoadPlugin disk
LoadPlugin load
...
# Enable Java Plugin
LoadPlugin "java"
# Configure Java Plugin
<Plugin "java">
JVMArg "-verbose:jni"
JVMArg "-Djava.class.path=/usr/share/collectd/java/applicationinsights-collectd-1.0.5.jar:/usr/share/collectd/java/collectd-api.jar"
# Enabling Application Insights plugin
LoadPlugin "com.microsoft.applicationinsights.collectd.ApplicationInsightsWriter"
# Configuring Application Insights plugin
<Plugin ApplicationInsightsWriter>
InstrumentationKey "12345678-1234-1234-1234-123456781234"
</Plugin>
# Other plugin configurations ...
...
</Plugin>
...
```
配置其他 [collectd 插件](https://collectd.org/wiki/index.php/Table_of_Plugins),其可从不同源中收集各种数据。
根据其[手册](https://collectd.org/wiki/index.php/First_steps)重启 collectd。
## <a name="view-the-data-in-application-insights"></a>查看 Application Insights 中的数据
在 Application Insights 资源中,打开[指标和添加图表][metrics],选择要从“自定义”类别查看的指标。
默认情况下,会从收集指标的所有主机中聚合指标。 要查看每个主机的指标,在“图表”详细信息边栏选项卡中,打开“分组”,并选择按 CollectD-Host 分组。
## <a name="to-exclude-upload-of-specific-statistics"></a>排除特定统计信息的上传
默认情况下,Application Insights 插件会发送由启用的所有 collectd“读取”插件收集的所有数据。
排除特定插件或数据源中的数据:
* 编辑配置文件。
* 在 `<Plugin ApplicationInsightsWriter>` 中,添加如下指令行:
| 指令 | 效果 |
| --- | --- |
| `Exclude disk` |排除由 `disk` 插件收集的所有数据 |
| `Exclude disk:read,write` |排除 `disk` 插件中名为 `read` 和 `write` 的源。 |
使用新行分隔指令。
## <a name="problems"></a>遇到问题?
*在门户中看不到数据*
* 打开[搜索][diagnostic],查看原始事件是否已到达。 有时需较长时间才能在指标资源管理器中看到数据。
* 可能需要[为传出数据设置防火墙例外](../../azure-monitor/app/ip-addresses.md)
* 在 Application Insights 插件中启用跟踪。 在 `<Plugin ApplicationInsightsWriter>` 中添加此行:
* `SDKLogger true`
* 打开终端并在详细模式下启动 collectd,查看其报告的任何问题:
* `sudo collectd -f`
## <a name="known-issue"></a>已知问题
Application Insights 写入插件与某些读取插件不兼容。 Application Insights 插件需要浮点数时,有些插件有时会发送“NaN”。
症状:collectd 日志显示包括“AI: ...SyntaxError:意外的令牌 N”的错误。
解决方法:排除由问题写入插件收集的数据。
<!--Link references-->
[api]: ../../azure-monitor/app/api-custom-events-metrics.md
[apiexceptions]: ../../azure-monitor/app/api-custom-events-metrics.md#track-exception
[availability]: ../../azure-monitor/app/monitor-web-app-availability.md
[diagnostic]: ../../azure-monitor/app/diagnostic-search.md
[java]: java-get-started.md
[javalogs]: java-trace-logs.md
[metrics]: ../../azure-monitor/platform/metrics-charts.md
| 32.832117 | 193 | 0.73566 | yue_Hant | 0.54804 |
f487901d7d40ad17a3c0a001d5b953e3d73614b9 | 7,800 | md | Markdown | README.md | omry/When-ML-pipeline-meets-Hydra | bd7ca84ec9b3fc5c4065c97fe5f0b2567fb77d1c | [
"MIT"
] | null | null | null | README.md | omry/When-ML-pipeline-meets-Hydra | bd7ca84ec9b3fc5c4065c97fe5f0b2567fb77d1c | [
"MIT"
] | null | null | null | README.md | omry/When-ML-pipeline-meets-Hydra | bd7ca84ec9b3fc5c4065c97fe5f0b2567fb77d1c | [
"MIT"
] | null | null | null | # What happens when ML pipeline meets Hydra?
[Hydra](https://github.com/facebookresearch/hydra) is a handy and powerful tool that can dramatically reduce our boilerplate codes and combine dynamically various configurations. I started out with the idea that `Hydra` could be used in **ML Pipeline** as well, and this is a Python app in the form of a template that I quickly implemented. Feedback is always welcome.
## Command Architecture
This app has a two-level command architecture. ([c](https://github.com/withsmilo/When-ML-pipeline-meets-Hydra/tree/master/src/when_ml_pipeline_meets_hydra/config/c)) The command line arguments for executing each command are as follows:
```
├── preprocessing
│ ├── foo -> c=preprocessing c/preprocessing_sub=foo
│ └── bar -> c=preprocessing c/preprocessing_sub=bar
├── modeling
│ ├── foo -> c=modeling c/modeling_sub=foo
│ └── bar -> c=modeling c/modeling_sub=bar
├── deployment
│ ├── foo -> c=deployment c/deployment_sub=foo
│ └── bar -> c=deployment c/deployment_sub=foo
└── help -> c=help
```
## Prepared Configuration
Here are the configurations prepared for this app. ([preprocessing](https://github.com/withsmilo/When-ML-pipeline-meets-Hydra/tree/master/src/when_ml_pipeline_meets_hydra/config/preprocessing), [modeling](https://github.com/withsmilo/When-ML-pipeline-meets-Hydra/tree/master/src/when_ml_pipeline_meets_hydra/config/modeling/model), [deployment](https://github.com/withsmilo/When-ML-pipeline-meets-Hydra/tree/master/src/when_ml_pipeline_meets_hydra/config/deployment)) The command line arguments for using each configuration are as follows:
```
├── preprocessing
│ ├── dataset
│ │ ├── dataset_1.yaml -> preprocessing/dataset=dataset_1.yaml
│ │ └── dataset_2.yaml -> preprocessing/dataset=dataset_2.yaml
│ └── param
│ ├── param_1.yaml -> preprocessing/param=param_1.yaml
│ └── param_2.yaml -> preprocessing/param=param_2.yaml
├── modeling
│ ├── model
│ │ ├── model_1.yaml -> modeling/model=model_1.yaml
│ │ └── model_1.yaml -> modeling/model=model_2.yaml
│ └── param
│ ├── param_1.yaml -> modeling/param=param_1.yaml
│ └── param_2.yaml -> modeling/param=param_2.yaml
└── deployment
└── cluster
├── cluster_1.yaml -> deployment/cluster=cluster_1.yaml
└── cluster_1.yaml -> deployment/cluster=cluster_2.yaml
```
## How to Install
```
# Create a new Anaconda environment if needed.
$ conda create --name when_ml_pipeline_meets_hydra python=3.6 -y
$ conda activate when_ml_pipeline_meets_hydra
# Clone this repo.
$ git clone https://github.com/withsmilo/When-ML-pipeline-meets-Hydra.git
$ cd When-ML-pipeline-meets-Hydra
# Install this app.
$ python setup.py develop
$ when_ml_pipeline_meets_hydra --help
```
## ML Pipeline Test
First of all, I will construct a new ML pipeline dynamically using all `*_1.yaml` configurations and executing the same `foo` subcommand per each step. The command you need is simple and structured.
```
$ when_ml_pipeline_meets_hydra \
preprocessing/dataset=dataset_1.yaml \
preprocessing/param=param_1.yaml \
modeling/model=model_1.yaml \
modeling/param=param_1.yaml \
deployment/cluster=cluster_1.yaml \
c/preprocessing_sub=foo \
c/modeling_sub=foo \
c/deployment_sub=foo \
c=preprocessing,modeling,deployment \
--multirun
```
```
[2019-10-13 02:05:17,597] - Launching 3 jobs locally
[2019-10-13 02:05:17,597] - Sweep output dir : .multirun/2019-10-13
[2019-10-13 02:05:17,597] - #0 : preprocessing/dataset=dataset_1.yaml preprocessing/param=param_1.yaml modeling/model=model_1.yaml modeling/param=param_1.yaml deployment/cluster=cluster_1.yaml c/preprocessing_sub=foo c/modeling_sub=foo c/deployment_sub=foo c=preprocessing
========== Run preprocessing's 'foo' subcommand ==========
dataset_info:
{
"name": "dataset_1",
"path": "/path/of/dataset/1"
}
param_info:
{
"name": "param_1",
"output_path": "/path/of/output/path/1",
"key_1_1": "value_1_1",
"key_1_2": "value_1_2"
}
Do something here!
[2019-10-13 02:05:17,741] - #1 : preprocessing/dataset=dataset_1.yaml preprocessing/param=param_1.yaml modeling/model=model_1.yaml modeling/param=param_1.yaml deployment/cluster=cluster_1.yaml c/preprocessing_sub=foo c/modeling_sub=foo c/deployment_sub=foo c=modeling
========== Run modeling's 'foo' subcommand ==========
model_info:
{
"name": "model_1",
"input_path": "/path/of/input/path/1",
"output_path": "/path/of/output/path/1"
}
param_info:
{
"name": "param_1",
"hyperparam_key_1_1": "hyperparam_value_1_1",
"hyperparam_key_1_2": "hyperparam_value_1_2"
}
Do something here!
[2019-10-13 02:05:17,878] - #2 : preprocessing/dataset=dataset_1.yaml preprocessing/param=param_1.yaml modeling/model=model_1.yaml modeling/param=param_1.yaml deployment/cluster=cluster_1.yaml c/preprocessing_sub=foo c/modeling_sub=foo c/deployment_sub=foo c=deployment
========== Run deployment's 'foo' subcommand ==========
cluster_info:
{
"name": "cluster_1",
"url": "https://cluster/1/url",
"id": "user_1",
"pw": "pw_1"
}
Do something here!
```
After then, if you'd like to deploy a model that has only changed the hyperparameter settings to another cluster, you can simply change `modeling/param` to `param_2.yaml` and `deployment/cluster` to `cluster_2.yaml` before executing your command. That's it!
```
$ when_ml_pipeline_meets_hydra \
preprocessing/dataset=dataset_1.yaml \
preprocessing/param=param_1.yaml \
modeling/model=model_1.yaml \
modeling/param=param_2.yaml \
deployment/cluster=cluster_2.yaml \
c/preprocessing_sub=foo \
c/modeling_sub=foo \
c/deployment_sub=foo \
c=preprocessing,modeling,deployment \
--multirun
```
```
[2019-10-13 02:11:17,319] - Launching 3 jobs locally
[2019-10-13 02:11:17,319] - Sweep output dir : .multirun/2019-10-13
[2019-10-13 02:11:17,319] - #0 : preprocessing/dataset=dataset_1.yaml preprocessing/param=param_1.yaml modeling/model=model_1.yaml modeling/param=param_2.yaml deployment/cluster=cluster_2.yaml c/preprocessing_sub=foo c/modeling_sub=foo c/deployment_sub=foo c=preprocessing
========== Run preprocessing's 'foo' subcommand ==========
dataset_info:
{
"name": "dataset_1",
"path": "/path/of/dataset/1"
}
param_info:
{
"name": "param_1",
"output_path": "/path/of/output/path/1",
"key_1_1": "value_1_1",
"key_1_2": "value_1_2"
}
Do something here!
[2019-10-13 02:11:17,459] - #1 : preprocessing/dataset=dataset_1.yaml preprocessing/param=param_1.yaml modeling/model=model_1.yaml modeling/param=param_2.yaml deployment/cluster=cluster_2.yaml c/preprocessing_sub=foo c/modeling_sub=foo c/deployment_sub=foo c=modeling
========== Run modeling's 'foo' subcommand ==========
model_info:
{
"name": "model_1",
"input_path": "/path/of/input/path/1",
"output_path": "/path/of/output/path/1"
}
param_info:
{
"name": "param_2",
"hyperparam_key_2_1": "hyperparam_value_2_1",
"hyperparam_key_2_2": "hyperparam_value_2_2"
}
Do something here!
[2019-10-13 02:11:17,588] - #2 : preprocessing/dataset=dataset_1.yaml preprocessing/param=param_1.yaml modeling/model=model_1.yaml modeling/param=param_2.yaml deployment/cluster=cluster_2.yaml c/preprocessing_sub=foo c/modeling_sub=foo c/deployment_sub=foo c=deployment
========== Run deployment's 'foo' subcommand ==========
cluster_info:
{
"name": "cluster_2",
"url": "https://cluster/2/url",
"id": "user_2",
"pw": "pw_2"
}
Do something here!
```
As well as this scenario, you can think of various cases.
## Note
This project has been set up using PyScaffold 3.2.2. For details and usage information on PyScaffold see https://pyscaffold.org/.
## License
This app is licensed under [MIT License](https://github.com/withsmilo/When-ML-pipeline-meets-Hydra/blob/master/LICENSE.txt).
| 41.052632 | 540 | 0.724231 | eng_Latn | 0.412135 |
f487ec9455771c53bf1a22cb4b1d767acaa4ca8a | 20,468 | md | Markdown | CHANGELOG.md | Wicaeed/haproxy | a21dba442ba7732488925086e7b7eec29346d774 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | Wicaeed/haproxy | a21dba442ba7732488925086e7b7eec29346d774 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | Wicaeed/haproxy | a21dba442ba7732488925086e7b7eec29346d774 | [
"Apache-2.0"
] | null | null | null | # haproxy Cookbook CHANGELOG
This file is used to list changes made in each version of the haproxy cookbook.
## Unreleased
## 12.1.0 - *2021-06-14*
- Add `ssl_lib` and `ssl_inc` properties to `haproxy_install` to support openssl - [@derekgroh](https://github.com/derekgroh)
## 12.0.1 - *2021-06-01*
## 12.0.0 - *2021-05-13*
- Refactor to use resource partials
- Add delete action to most resources
- Convert `install` resource boolean strings to true/false
- Ensure section is created before adding an ACL
## 11.0.0 - *2021-05-07*
- Drop testing for Debian 8, Ubuntu 16.04 & Ubuntu 18.04
- Add testing for Debian 9 Ubuntu 20.04 & Ubuntu 21.04
- Fix the minimum Chef version to 15.3
unified_mode was introduced in 15.3
- Change kitchen to use the Bento provided Amazonlinux2 image
- Fix test suite
## 10.0.1 - *2021-04-26*
- Add missing configuration file properties to all resources
## 10.0.0 - *2021-04-24*
- Add configuration test function to the service resource - [@bmhughes](https://github.com/bmhughes)
- Fix generating multiple actions from the service resource - [@bmhughes](https://github.com/bmhughes)
- Kitchen test with CentOS 8/8 stream - [@bmhughes](https://github.com/bmhughes)
- Fix IUS repo causing a run failure on an unsupported platform - [@bmhughes](https://github.com/bmhughes)
- Move configuration resource creation into resource helper module - [@bmhughes](https://github.com/bmhughes)
## [v9.1.0] (2020-10-07)
### Added
- testing for haproxy 2.2
### Removed
- testing for haproxy 1.9 & 2.1
## [v9.0.1] (2020-09-15)
### Added
- added lua compilation flags to `haproxy_install` resource
### Fixed
- resolved cookstyle error: libraries/helpers.rb:19:24 refactor: `ChefCorrectness/InvalidPlatformFamilyInCase`
- Updated IUS repo url to `https://repo.ius.io/ius-release-el7.rpm`
### Changed
- Turn on unified_mode for all resources
## [v9.0.0] (2020-02-21)
### Changed
- Removed `depends_on` build-essential, as this is now in Chef Core
### Fixed
- Cookstyle fixes for cookstyle version 5.20
## [v8.3.0] (2020-01-09)
### Added
- on `haproxy_install` epel is now a configurable option
### Changed
- Migrated testing to github actions
### Fixed
- ius repo will only echo out if enabled
## [v8.2.0] (2019-12-23)
### Added
- `fastcgi` resource to support FastCGI applications
### Changed
- Default source install version is haproxy 2.1
### Fixed
- Bug with single newline between resources when multiple of the same type are defined
### Removed
- `.foodcritic` as it is no longer run by deliver local.
- `.rubocop.yml` as no longer required.
## [v8.1.1] (2019-10-02)
### Changed
- Updated `config_defaults` resourcce `stats` property default value to empty hash.
- Updated metadata.rb chef_version to >=13.9 due to resource `description`.
## [v8.1.0] (2019-06-24)
### Changed
- Updated build target to linux-glibc for haproxy 2.0 compatibility.
- Updated integration tests to cover haproxy 2.0.
- Moved install resource target_os check to libraries.
## [v8.0.0] (2019-05-29)
### Added
- The bind config hash joins with a space instead of a colon.
- The peer resource.
- The mailer resource.
## [v7.1.0] (2019-04-16)
### Changed
- Clean up unused templates and files.
### Fixed
- Name conflict with systemd_unit in service resource.
## [v7.0.0] (2019-04-10)
### Added
- `health` to allowed values for `mode` on `frontend`, `backend`, `listen`, `default`.
- `apt-update` for debian platforms.
- ius repo for CentOS and Redhat package installations (resolves #348).
### Changed
- Clean up unit and integration test content regular expressions.
- Move system command to a helper.
- Support only systemd init systems.
### Removed
- Remove `poise_service` dependency in favor of systemd_unit.
### Fixed
- Fix cookbook default value in `config_global`.
## [v6.4.0] (2019-03-20)
### Changed
- Move resource documentation to dedicated folder with md per resource.
- Rename haproxy_cache `name` property as `cache_name`.
### Fixed
- Source installs on CentOS 6.
## [v6.3.0] (2019-02-18)
### Added
- Haproxy_cache resource for caching small objects with HAProxy version >=1.8.
### Changed
- Expand integration test coverage to all stable and LTS HAProxy versions.
- Documentation - clarify extra_options hash string => array option.
- Clarify the supported platforms - add AmazonLinux 2, remove fedora & freebsd.
## [v6.2.7] (2019-01-10)
### Added
- Test for appropriate spacing from start of line and end of line.
- `hash_type` param to `haproxy_backend`, `haproxy_listen`, and `haproxy_config_defaults` resources.
- `reqirep` and `reqrep` params to `haproxy_backend`, `haproxy_frontend`, and `haproxy_listen` resources.
- `sensitive` param to `haproxy_install`; set to false to show diff output during Chef run.
### Changed
- Allow passing an array to `haproxy_listen`'s `http_request` param.
### Fixed
- Fix ordering for `haproxy_listen`: `acl` directives should be applied before `http-request`.
## [v6.2.6] (2018-11-05)
### Changed
- Put `http_request` rules before the `use_backend`.
## [v6.2.5] (2018-10-09)
### Added
- rspec examples for resource usage.
### Removed
- Chef-12 support.
- CPU cookbook dependency.
### Fixed
- Systemd wrapper, the wrapper is no longer included with haproxy versions greater than 1.8.
## [v6.2.4] (2018-09-19)
### Added
- Server property to listen resource and config template.
## [v6.2.3] (2018-08-03)
### Removed
- A few resource default values so they can be specified in the haproxy.cfg default section and added service reload exmample to the readme for config changes.
## [v6.2.2] (2018-08-03)
### Changed
- Made `haproxy_install` `source_url` property dynamic with `source_version` property and removed the need to specify checksum #307.
## [v6.2.1] (2018-08-01)
### Added
- Compiling from source crypt support #305.
## [v6.2.0] (2018-05-11)
### Changed
- Require Chef 12.20 or later.
- Uses the build_essential resource not the default recipe so the cookbook can be skipped entirely if running on Chef 14+.
## [v6.1.0] (2018-04-12)
### **Breaking changes**
### Added
- `haproxy_service` resource see test suites for usage.
- Support for haproxy 1.8.
- Test haproxy version 1.8.7 and 1.7.8.
- Test on chef-client version 13.87 and 14.
- Notes on how we generate the travis.yml list.
### Changed
- Require Chef 12.20 or later.
- Uses the build_essential resource not the default recipe so the cookbook can be skipped entirely if running on Chef 14+.
- Simplify the kitchen matrix.
- Use default action in tests (:create).
- Set the use_systemd property from the init package system.
- Adding in systemd for SUSE Linux.
### Removed
- `kitchen.dokken.yml` suites and inherit from kitchen.yml.
- Amazon tests until a new dokken image is produced that is reliable.
### Fixed
- Source comparison.
## [v6.0.0] (2018-03-28)
### Removed
- `compat_resource` cookbok dependency and push the required Chef version to 12.20
## [v5.0.4] (2018-03-28)
### Changed
- Make 1.8.4 the default installed version (#279)
- Use dokken docker images
- Update tests for haproxy service
- tcplog is now a valid input for the `haproxy_config_defaults` resource (#284)
- bin prefix is now reflected in the service config. (#288, #289)
## [v5.0.3] (2018-02-02)
### Fixed
- `foodcritic` warning for not defining `name_property`.
## [v5.0.2] (2017-11-29)
### Fixed
- Typo in listen section, makes previously unprintable expressions, printable in http-request, http-response and `default_backend`.
## [v5.0.1] (2017-08-10)
### Removed
- useless blank space in generated config file haproxy.cfg
## [v5.0.0] (2017-08-07)
### Added
- Option for install only #251.
### Changed
- Updating service to use cookbook template.
- updating to haproxy 1.7.8, updating `source_version` in test files(kitchen,cookbook, etc)
- updating properties to use `new_resource`
### Fixed
- `log` `property` in `global` resource can now be of type `Array` or `String`. This fixes #252
- fixing supports line #258
## [v4.6.1] (2017-08-02)
### Changed
- Reload instead of restart on config change
- Specify -sf argument last to support haproxy < 1.6.0
## [v4.6.0] (2017-07-13)
### Added
- `conf_template_source`
- `conf_cookbook`
- Support Array value for `extra_options` entries. (#245, #246)
## [v4.5.0] (2017-06-29)
### Added
- `resolver` resource (#240)
## [v4.4.0] (2017-06-28)
### Added
- `option` as an Array `property` for `backend` resource. This fixes #234
- Synced Debian/Ubuntu init script with latest upstream package changes
## [v4.3.1] (2017-06-13)
### Added
- Oracle Linux 6 support
### Removed
- Scientific linux support as we don't have a reliable image
## [v4.3.0] (2017-05-31)
### Added
- Chefspec Matchers for the resources defined in this cookbook.
- `mode` property to `backend` and `frontend` resources.
- `maxconn` to `global` resource.
### Removed
- `default_backend` as a required property on the `frontend` resource.
## [v4.2.0] (2017-05-04)
### Added
- In `acl` resource, usage: `test/fixtures/cookbooks/test/recipes/config_acl.rb`
- In `use_backend` resource, usage: `test/fixtures/cookbooks/test/recipes/config_acl.rb`
- `acl` and `use_backend` to `listen` resource.
- Amazon Linux as a supported platform.
### Changed
- Pinned `build-essential`, `>= 8.0.1`
- Pinned `poise-service`, `>= 1.5.1`
- Cleaned up arrays in `templates/default/haproxy.cfg.erb`
### Fixed
- Init script for Amazon Linux.
### BREAKING CHANGES
- This version removes `stats_socket`, `stats_uri` and `stats_timeout` properties from the `haproxy_global` and `haproxy_listen` resources in favour of using a hash to pass configuration options.
## [v4.1.0] (2017-05-01)
### Added
- `userlist` resource, to see usage: `test/fixtures/cookbooks/test/recipes/config_1_userlist.rb`
- chef-search example in: `test/fixtures/cookbooks/test/recipes/config_backend_search.rb`
- Multiple addresses and ports on listener and frontend (#205)
### Changed
- Updating source install test to take node attributes as haproxy.org is slow.
### Fixed
- `haproxy_retries` in `haproxy_config_defaults` resource
## [v4.0.2] (2017-04-21)
### Fixed
- haproxy service start on Ubuntu 14.04 (#199)
- Reload HAProxy when changing configuration (#197)
## [v4.0.1] (2017-04-20)
### Added
- Updating README.md
- Adding compat_resource for chef-12 support
- Improvement when rendering the configuration file (#196)
## [v4.0.0] (2017-04-18)
### COMPATIBILIY WARNING
- This version removes the existing recipes, attributes, and instance provider in favor of the new haproxy_install and haproxy_ configuration resources. Why not just leave them in place? Well unfortunately they were utterly broken for anything other than the most trivial usage. Rather than continue the user pain we've opted to remove them and point users to a more modern installation method. If you need the legacy installation methods simply pin to the 3.0.4 release.
- THIS IS GOING TO BREAK EVERYTHING YOU KNOW AND LOVE
- 12.5 or greater rewrite
- Custom Resource Only, no recipes
## [v3.0.4] (2017-03-29)
### Fixed
- Bug introduced in (#174) (#182)
## [v3.0.3] (2017-03-28)
### Added
- Multiple addresses and ports on listener and frontend (#174)
- Customize logging destination (#178)
### Changed
- Updating to use bats/serverspec (#179)
## [v3.0.2] (2017-03-27)
### Added
- Allow server startup from `app_lb` recipe. (#171)
- Use Delivery instead of Rake
- Make this cookbook compatible with Chef-13, note: `params` option is now `parameters` (#175)
## [v3.0.1] (2017-01-30)
### Added
- Reload haproxy configuration on changes (#152)
- Merging in generic socket conf (#107)
- Updating config to use facilities hash dynamically (#102)
- Adding `tproxy` and splice per (#98)
### Removed
- Members with nil ips from member array. (#79)
## [v3.0.0] (2017-01-24)
### Added
- Configurable debug options
- CentOS7 compatibility (#123)
- Adding poise-service for service management
### Changed
- Updating source install to use Haproxy 1.7.2
- Chef >= 12.1 required
- Use `['haproxy']['source']['target_cpu']` instead of `['haproxy']['source']['target_os']` to detect correct architecture. (#150)
## [v2.0.2] (2016-12-30)
### Fixed
- Cookstyle
- The github URL for the repo in various locations
### Changed
- Travis testing updates
- Converted file modes to strings
- Updated the config resource to lazily evaluate node attribute values to better load the values when overridden in wrapper cookbooks
## v2.0.1 (2016-12-08)
### Fixed
- Dynamic configuration to properly template out frontend and backend sections
### Chnaged
- Update Chef Brigade to Sous Chefs
- Updated contributing docs to remove the reference to the develop branch
## v2.0.0 (2016-11-09)
### Breaking Changes
- The default recipe is now an empty recipe with manual configuration performed in the 'manual' recipe
- Remove Chef 10 compatibility code
- Switch from Librarian to Berksfile
- Updated the source recipe to install 1.6.9 by default
### Added
- Migrated this cookbook from Heavy Water to Chef Brigade so we can ensure more frequent releases and maintenance
- A code of conduct for the project. Read it.
- Several new syslog configuration attributes
- A new attribute for stats_socket_level
- A new attribute for retries
- A chefignore file to speed up syncs from the server
- Scientific and oracle as supported platforms in the metadata
- source_url, issues_url, and chef_version metadata
- Enabled why-run support in the default haproxy resource
- New haproxy_config resource
- Guardfile
- Testing in Travis CI with a Rakefile that runs cookstyle, foodcritic, and ChefSpec as well as a Kitchen Dokken config that does integration testing of the package install
- New node['haproxy']['pool_members'] and node['haproxy']['pool_members_option'] attributes
### Changed
- The haproxy config is now verified before the service restarts / reloads to prevent taking down haproxy with a bad config
- Update the Kitchen config file to use Bento boxes and new platforms
- Update ChefSpec matchers to use the latest format
- Broke search logic out into a new_discovery recipe
### Removed
- Attributes from the metadata file as these are redundant
- Broken tarball validation in the source recipe to prevented installs from completing
### Fixed
- Source installs not running if an older version was present on the node
- Resolved all cookstyle and foodcritic warnings
## v1.6.7
### Added
- ChefSpec matchers and test coverage
### Changed
- Replaced references to Opscode with Chef
## v1.6.6
### Changed
- Parameterize options for admin listener.
- Renamed templates/rhel to templates/redhat.
- Sort pool members by hostname to avoid needless restarts.
- Support amazon linux init script.
- Support to configure global options.
### Fixed
- CPU Tuning, corrects cpu_affinity resource triggers
## v1.6.4
## v1.6.2
### Added
- [COOK-3135](https://tickets.chef.io/browse/COOK-3135) - Allow setting of members with default recipe without changing the template.
### Fixed
- [COOK-3424](https://tickets.chef.io/browse/COOK-3424) - Haproxy cookbook attempts to alter an immutable attribute.
## v1.6.0
### Added
- Allow setting of members with default recipe without changing the template.
## v1.5.0
### Added
- [COOK-3660](https://tickets.chef.io/browse/COOK-3660) - Make haproxy socket default user group configurable
- [COOK-3537](https://tickets.chef.io/browse/COOK-3537) - Add OpenSSL and zlib source configurations
- [COOK-2384](https://tickets.chef.io/browse/COOK-2384) - Add LWRP for multiple haproxy sites/configs
## v1.4.0
### Added
- [COOK-3237](https://tickets.chef.io/browse/COOK-3237) - Enable cookie-based persistence in a backend
- [COOK-3216](https://tickets.chef.io/browse/COOK-3216) - Metadata attributes
- [COOK-3211](https://tickets.chef.io/browse/COOK-3211) - Support RHEL
- [COOK-3133](https://tickets.chef.io/browse/COOK-3133) - Allow configuration of a global stats socket
## v1.3.2
### Fixed
- [COOK-3046]: haproxy default recipe broken by COOK-2656.
### Added
- [COOK-2009]: Test-kitchen support to haproxy.
## v1.3.0
### Changed
- [COOK-2656]: Unify the haproxy.cfg with that from `app_lb`.
### Added
- [COOK-1488]: Provide an option to build haproxy from source.
## v1.2.0
### Added
- [COOK-1936] - use frontend / backend logic.
- [COOK-1937] - cleanup for configurations.
- [COOK-1938] - more flexibility for options.
- [COOK-1939] - reloading haproxy is better than restarting.
- [COOK-1940] - haproxy stats listen on 0.0.0.0 by default.
- [COOK-1944] - improve haproxy performance.
## v1.1.4
### Added
- [COOK-1839] - `httpchk` configuration to `app_lb` template.
## v1.1.0
### Changed
- [COOK-1275] - haproxy-default.erb should be a cookbook_file.
### Fixed
- [COOK-1594] - Template-Service ordering issue in `app_lb` recipe.
## v1.0.6
### Changed
- [COOK-1310] - Redispatch flag has changed.
## v1.0.4
### Changed
- [COOK-806] - Load balancer should include an SSL option.
- [COOK-805] - Fundamental haproxy load balancer options should be configurable.
## v1.0.3
### Changed
- [COOK-620] `haproxy::app_lb`'s template should use the member cloud private IP by default.
## v1.0.2
### Fixed
- Regression introduced in v1.0.1.
## v1.0.1
### Added
- Account for the case where load balancer is in the pool.
## v1.0.0
### Changed
- Use `node.chef_environment` instead of `node['app_environment']`.
[10.0.0 - *2021-04-24*]: https://github.com/sous-chefs/haproxy/compare/v6.4.0...HEAD
[v3.0.0]: https://github.com/sous-chefs/haproxy/compare/v2.0.2...v3.0.0
[v3.0.1]: https://github.com/sous-chefs/haproxy/compare/v3.0.0...v3.0.1
[v3.0.2]: https://github.com/sous-chefs/haproxy/compare/v3.0.1...v3.0.2
[v3.0.3]: https://github.com/sous-chefs/haproxy/compare/v3.0.2...v3.0.3
[v3.0.4]: https://github.com/sous-chefs/haproxy/compare/v3.0.3...v3.0.4
[v4.0.0]: https://github.com/sous-chefs/haproxy/compare/v3.0.4...v4.0.0
[v4.0.1]: https://github.com/sous-chefs/haproxy/compare/v4.0.0...v4.0.1
[v4.0.2]: https://github.com/sous-chefs/haproxy/compare/v4.0.1...v4.0.2
[v4.1.0]: https://github.com/sous-chefs/haproxy/compare/v4.0.2...v4.1.0
[v4.2.0]: https://github.com/sous-chefs/haproxy/compare/v4.1.0...v4.2.0
[v4.3.0]: https://github.com/sous-chefs/haproxy/compare/v4.2.0...v4.3.0
[v4.3.1]: https://github.com/sous-chefs/haproxy/compare/v4.3.0...v4.3.1
[v4.4.0]: https://github.com/sous-chefs/haproxy/compare/v4.3.1...v4.4.0
[v4.5.0]: https://github.com/sous-chefs/haproxy/compare/v4.4.0...v4.5.0
[v4.6.0]: https://github.com/sous-chefs/haproxy/compare/v4.5.0...v4.6.0
[v4.6.1]: https://github.com/sous-chefs/haproxy/compare/v4.6.0...v4.6.1
[v5.0.0]: https://github.com/sous-chefs/haproxy/compare/v4.6.1...v5.0.0
[v5.0.1]: https://github.com/sous-chefs/haproxy/compare/v5.0.0...v5.0.1
[v5.0.2]: https://github.com/sous-chefs/haproxy/compare/v5.0.1...v5.0.2
[v5.0.3]: https://github.com/sous-chefs/haproxy/compare/v5.0.2...v5.0.3
[v5.0.4]: https://github.com/sous-chefs/haproxy/compare/v5.0.3...v5.0.4
[v6.0.0]: https://github.com/sous-chefs/haproxy/compare/v5.0.4...v6.0.0
[v6.1.0]: https://github.com/sous-chefs/haproxy/compare/v6.0.0...v6.1.0
[v6.2.0]: https://github.com/sous-chefs/haproxy/compare/v6.1.0...v6.2.0
[v6.2.1]: https://github.com/sous-chefs/haproxy/compare/v6.2.0...v6.2.1
[v6.2.2]: https://github.com/sous-chefs/haproxy/compare/v6.2.1...v6.2.2
[v6.2.3]: https://github.com/sous-chefs/haproxy/compare/v6.2.2...v6.2.3
[v6.2.4]: https://github.com/sous-chefs/haproxy/compare/v6.2.3...v6.2.4
[v6.2.5]: https://github.com/sous-chefs/haproxy/compare/v6.2.4...v6.2.5
[v6.2.6]: https://github.com/sous-chefs/haproxy/compare/v6.2.5...v6.2.6
[v6.2.7]: https://github.com/sous-chefs/haproxy/compare/v6.2.6...v6.2.7
[v6.3.0]: https://github.com/sous-chefs/haproxy/compare/v6.2.7...v6.3.0
[v6.4.0]: https://github.com/sous-chefs/haproxy/compare/v6.3.0...v6.4.0
[v7.0.0]: https://github.com/sous-chefs/haproxy/compare/v6.4.0...v7.0.0
[v7.1.0]: https://github.com/sous-chefs/haproxy/compare/v7.0.0...v7.1.0
[v8.0.0]: https://github.com/sous-chefs/haproxy/compare/v7.1.0...v8.0.0
[v8.1.0]: https://github.com/sous-chefs/haproxy/compare/v8.0.0...v8.1.0
[v8.1.1]: https://github.com/sous-chefs/haproxy/compare/v8.1.0...v8.1.1
[v8.2.0]: https://github.com/sous-chefs/haproxy/compare/v8.1.1...v8.2.0
[v8.3.0]: https://github.com/sous-chefs/haproxy/compare/v8.2.0...v8.3.0
| 27.218085 | 471 | 0.714335 | eng_Latn | 0.820171 |
f488557c0662e110d76e56d64b2a39c6024dd851 | 374 | md | Markdown | content/blog/56/index.md | sengmitnick/sengmitnick | a01b1e5fc805358d11f146de4fc1e85d9f88086f | [
"MIT"
] | null | null | null | content/blog/56/index.md | sengmitnick/sengmitnick | a01b1e5fc805358d11f146de4fc1e85d9f88086f | [
"MIT"
] | 1 | 2021-04-08T07:24:52.000Z | 2021-04-08T07:24:52.000Z | content/blog/56/index.md | sengmitnick/sengmitnick | a01b1e5fc805358d11f146de4fc1e85d9f88086f | [
"MIT"
] | 1 | 2021-04-08T07:23:32.000Z | 2021-04-08T07:23:32.000Z | ---
mark: original
title: 基于SHT3x传感器开发的C51程序
date: 2014-08-29 12:28:29
categories: [technology]
tags: []
author: SengMitnick
linktitle: "56"
comments: true
toc: false
---
其中,在看数据手册时发现一个问题:<!--more-->
{{<img name="1.png">}}
从中可以知道,`ADDR`的地址为`0x44`,但是在写程序的时候一直没有应答,这个问题使我非常疑惑,直到后来才发现,I2C的地址应该使用7位来表示,最后一位为读写位,即若进行写操作地址应该为`0x44<<1 + 0 = 0x88`,进行读操作地址应该为`0x44<<1 + 1 = 0x89`。
| 22 | 148 | 0.721925 | zho_Hans | 0.155238 |
f48880fa65fd9d2ce831a8598f4ab1163f5be9f5 | 7,891 | md | Markdown | articles/mysql/howto-read-replicas-cli.md | ShowMeMore/azure-docs | d6b68b907e5158b451239e4c09bb55eccb5fef89 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-08-07T17:26:46.000Z | 2020-08-07T17:26:46.000Z | articles/mysql/howto-read-replicas-cli.md | osanam-giordane/azure-docs | 68927e94a6016bbfea0f321673f7eafeb65a5d07 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-06-12T00:05:28.000Z | 2019-07-09T09:39:55.000Z | articles/mysql/howto-read-replicas-cli.md | osanam-giordane/azure-docs | 68927e94a6016bbfea0f321673f7eafeb65a5d07 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Create & manage read replicas - Azure Database for MySQL
description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure CLI or REST API.
author: ajlam
ms.author: andrela
ms.service: mysql
ms.topic: conceptual
ms.date: 09/14/2019
---
# How to create and manage read replicas in Azure Database for MySQL using the Azure CLI and REST API
In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
## Azure CLI
You can create and manage read replicas using the Azure CLI.
### Prerequisites
- [Install Azure CLI 2.0](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest)
- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the master server.
> [!IMPORTANT]
> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the master server is in one of these pricing tiers.
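If you're not sure which tier your master server is in, you can check before you begin. The following command is a minimal sketch that assumes the example server and resource group names used throughout this article:

```azurecli-interactive
# Print the master server's pricing tier (expect GeneralPurpose or MemoryOptimized)
az mysql server show --resource-group myresourcegroup --name mydemoserver --query sku.tier --output tsv
```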
### Create a read replica
A read replica server can be created using the following command:
```azurecli-interactive
az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup
```
The `az mysql server replica create` command requires the following parameters:
| Setting | Example value | Description |
| --- | --- | --- |
| resource-group | myresourcegroup | The resource group where the replica server will be created. |
| name | mydemoreplicaserver | The name of the new replica server that is created. |
| source-server | mydemoserver | The name or ID of the existing master server to replicate from. |
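Once provisioning completes, you can confirm the new server's role. This check is a sketch that relies on the `replicationRole` property exposed by the server resource; for a replica it should return `Replica`:

```azurecli-interactive
# Inspect the replication role of the newly created server
az mysql server show --resource-group myresourcegroup --name mydemoreplicaserver --query replicationRole --output tsv
```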
To create a cross-region read replica, use the `--location` parameter. The CLI example below creates the replica in West US.
```azurecli-interactive
az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup --location westus
```
> [!NOTE]
> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
> [!NOTE]
> Read replicas are created with the same server configuration as the master. The replica server's configuration can be changed after it has been created. We recommend keeping the replica server's configuration at values equal to or greater than the master's so that the replica can keep up with the master.
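For example, to scale up a replica's compute independently of the master, you can update its SKU. This is an illustrative sketch; `GP_Gen5_8` stands for a General Purpose, Gen 5, 8-vCore SKU and may not match your deployment:

```azurecli-interactive
# Scale the replica to 8 vCores in the General Purpose tier (illustrative SKU name)
az mysql server update --resource-group myresourcegroup --name mydemoreplicaserver --sku-name GP_Gen5_8
```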
### List replicas for a master server
To view all replicas for a given master server, run the following command:
```azurecli-interactive
az mysql server replica list --server-name mydemoserver --resource-group myresourcegroup
```
The `az mysql server replica list` command requires the following parameters:
| Setting | Example value | Description |
| --- | --- | --- |
| resource-group | myresourcegroup | The resource group of the master server. |
| server-name | mydemoserver | The name or ID of the master server. |
### Stop replication to a replica server
> [!IMPORTANT]
> Stopping replication to a server is irreversible. Once replication has stopped between a master and replica, it cannot be undone. The replica server then becomes a standalone server that supports both reads and writes. This server cannot be made into a replica again.
Replication to a read replica server can be stopped using the following command:
```azurecli-interactive
az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup
```
The `az mysql server replica stop` command requires the following parameters:
| Setting | Example value | Description |
| --- | --- | --- |
| resource-group | myresourcegroup | The resource group where the replica server exists. |
| name | mydemoreplicaserver | The name of the replica server to stop replication on. |
### Delete a replica server
Deleting a read replica server can be done by running the **[az mysql server delete](/cli/azure/mysql/server)** command.
```azurecli-interactive
az mysql server delete --resource-group myresourcegroup --name mydemoreplicaserver
```
### Delete a master server
> [!IMPORTANT]
> Deleting a master server stops replication to all replica servers and deletes the master server itself. Replica servers become standalone servers that now support both reads and writes.
To delete a master server, you can run the **[az mysql server delete](/cli/azure/mysql/server)** command.
```azurecli-interactive
az mysql server delete --resource-group myresourcegroup --name mydemoserver
```
## REST API
You can create and manage read replicas using the [Azure REST API](/rest/api/azure/).
### Create a read replica
You can create a read replica by using the [create API](/rest/api/mysql/servers/create):
```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{replicaName}?api-version=2017-12-01
```
```json
{
"location": "southeastasia",
"properties": {
"createMode": "Replica",
"sourceServerId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}"
}
}
```
> [!NOTE]
> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized master server and restarted the server, you receive an error. Complete those two steps before you create a replica.
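A hedged sketch of those two steps (these are the standard single-server configuration and restart REST operations; treat the exact paths as assumptions):
```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}/configurations/azure.replication_support?api-version=2017-12-01
```
```json
{
  "properties": {
    "value": "REPLICA",
    "source": "user-override"
  }
}
```
```http
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}/restart?api-version=2017-12-01
```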
A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the master server: compute generation, vCores, storage, and back-up retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
> [!IMPORTANT]
> Before a master server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master.
### List replicas
You can view the list of replicas of a master server using the [replica list API](/rest/api/mysql/replicas/listbyserver):
```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}/Replicas?api-version=2017-12-01
```
### Stop replication to a replica server
You can stop replication between a master server and a read replica by using the [update API](/rest/api/mysql/servers/update).
After you stop replication between a master server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
```http
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}?api-version=2017-12-01
```
```json
{
"properties": {
"replicationRole":"None"
}
}
```
### Delete a master or replica server
To delete a master or replica server, you use the [delete API](/rest/api/mysql/servers/delete):
When you delete a master server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
```http
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{serverName}?api-version=2017-12-01
```
## Next steps
- Learn more about [read replicas](concepts-read-replicas.md)
| 44.835227 | 333 | 0.768851 | eng_Latn | 0.982457 |
f48886fd878def71af203104504d90e07ca3865d | 17,439 | md | Markdown | articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md | tokawa-ms/azure-docs.ja-jp | c1db876db6cde4ac449e959e3bfe39f941660f8a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md | tokawa-ms/azure-docs.ja-jp | c1db876db6cde4ac449e959e3bfe39f941660f8a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md | tokawa-ms/azure-docs.ja-jp | c1db876db6cde4ac449e959e3bfe39f941660f8a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Automate Mobility Service installation for disaster recovery in Azure Site Recovery
description: Learn how to automate Mobility Service installation for disaster recovery of VMware VMs/physical servers in Azure Site Recovery.
author: Rajeswari-Mamilla
ms.topic: how-to
ms.date: 2/5/2020
ms.author: ramamill
ms.openlocfilehash: 2159ab8c2639f0f87fd53e8559dad518a3daa663
ms.sourcegitcommit: d767156543e16e816fc8a0c3777f033d649ffd3c
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/26/2020
ms.locfileid: "92544819"
---
# <a name="automate-mobility-service-installation"></a>モビリティ サービスのインストールを自動化する
この記事では、[Azure Site Recovery](site-recovery-overview.md)で Mobility Service エージェントのインストールと更新を自動化する方法について説明します。
オンプレミスの VMware VM と物理サーバーのディザスター リカバリーのために Site Recovery を Azure に展開する場合、複製する各マシンに Mobility Service エージェントをインストールします。 モビリティサービスは、マシン上のデータ書き込みをキャプチャし、複製のために Site Recovery プロセスサーバーに転送します。 Mobility Service はいくつかの方法で展開できます。
- **プッシュ インストール** :Azure portal でマシンのレプリケーションを有効にする場合に、Site Recovery によってモビリティ サービス エージェントをインストールします。
- **手動インストール** :各マシンにモビリティ サービスを手動でインストールします。 プッシュ インストールと手動インストールの[詳細情報](vmware-physical-mobility-service-overview.md)を参照してください。
- **自動デプロイ** :Microsoft Endpoint Configuration Manager などのソフトウェア デプロイ ツールや、Intigua JetPatch などのサード パーティ ツールなどで、インストールを自動化します。
インストールと更新を自動化することで、以下の場合の解決策が得られます。
- 組織で、保護されたサーバーへのプッシュ インストールが許可されない。
- 会社のポリシーで、パスワードを定期的に変更することが求められている。 プッシュ インストール用のパスワードを指定する必要がある。
- セキュリティ ポリシーで、特定のマシンにファイアウォール例外を追加することが許可されていない。
- ホスティング サービス プロバイダーとしてサービスを提供しており、Site Recovery を使用したプッシュ インストールに必要な顧客のマシン資格情報を提供したくない。
- エージェントのインストールを多数のサーバーに同時にスケーリングする必要があります。
- 計画メンテナンスの期間中にインストールとアップグレードをスケジュールしたい。
## <a name="prerequisites"></a>前提条件
インストールを自動化するには、次のアイテムが必要です。
- [Configuration Manager](/configmgr/) や [JetPatch](https://jetpatch.com/microsoft-azure-site-recovery/) などの、デプロイ済みのソフトウェア インストール ソリューション。
- [Azure](tutorial-prepare-azure.md) と[オンプレミス](vmware-azure-tutorial-prepare-on-premises.md)で、VMware のディザスター リカバリー用、または[物理サーバー](physical-azure-disaster-recovery.md)のディザスター リカバリー用のデプロイの前提条件が満たされている。 ディザスター リカバリーのための[サポート要件](vmware-physical-azure-support-matrix.md)を確認してください。
## <a name="prepare-for-automated-deployment"></a>自動デプロイを準備する
次の表に、Mobility Service のデプロイを自動化するためのツールとプロセスをまとめます。
**ツール** | **詳細** | **手順**
--- | --- | ---
**構成マネージャー** | 1.上記の[前提条件](#prerequisites)が満たされていることを確認します。 <br/><br/> 2.ソース環境をセットアップ (OVF テンプレートを使用して Site Recovery 構成サーバーを VMware VM としてデプロイするための OVA ファイルのダウンロードなど) して、ディザスター リカバリーをデプロイします。<br/><br/> 3.構成サーバーを Site Recovery サービスに登録し、ターゲット Azure 環境を設定して、レプリケーション ポリシーを構成します。<br/><br/> 4.モビリティサービスの自動デプロイでは、構成サーバーのパスフレーズとモビリティサービスのインストールファイルを含むネットワーク共有を作成します。<br/><br/> 5.インストールまたは更新を含む Configuration Manager パッケージを作成し、Mobility サービスのデプロイの準備を行います。<br/><br/> 6.その後、Mobility Service がインストールされているマシンの Azure へのレプリケーションを有効にできます。 | [Configuration Manager で自動化](#automate-with-configuration-manager)
**JetPatch** | 1.上記の[前提条件](#prerequisites)が満たされていることを確認します。 <br/><br/> 2.ソース環境をセットアップ (OVF テンプレートを使用して Azure Site Recovery 用の JetPatch Agent Manager を Site Recovery 環境にダウンロードしてデプロイするなど) して、ディザスター リカバリーをデプロイします。<br/><br/> 3.構成サーバーを Site Recovery に登録し、ターゲット Azure 環境を設定して、レプリケーション ポリシーを構成します。<br/><br/> 4.自動デプロイのために、JetPatch Agent Manager の構成を初期化して完了します。<br/><br/> 5.JetPatch では、サイト回復ポリシーを作成して、Mobility サービスエージェントのデプロイとアップグレードを自動化できます。 <br/><br/> 6.その後、Mobility Service がインストールされているマシンの Azure へのレプリケーションを有効にできます。 | [JetPatch Agent Manager による自動化](https://jetpatch.com/microsoft-azure-site-recovery-deployment-guide/)<br/><br/> [JetPatch でのエージェントインストールのトラブルシューティング](https://kc.jetpatch.com/hc/articles/360035981812)
## <a name="automate-with-configuration-manager"></a>Configuration Manager を使用して自動化する
### <a name="prepare-the-installation-files"></a>インストール ファイルを準備する
1. 前提条件を満たしていることを確認します。
1. 構成サーバーを実行しているマシンからアクセスできる、セキュリティで保護されたネットワーク ファイル共有 (SMB 共有) を作成します。
1. Configuration Manager で、モビリティサービスをインストールまたは更新する[サーバーをカテゴリー化します](/sccm/core/clients/manage/collections/automatically-categorize-devices-into-collections)。 1 つのコレクションにはすべての Windows サーバが含まれ、もう 1 つのコレクションにはすべての Linux サーバが含まれます。
1. ネットワーク共有で、次のフォルダーを作成します。
- Windows マシンにインストールする場合は、 _MobSvcWindows_ という名前のフォルダーを作成してください。
- Linux マシンにインストールする場合は、 _MobSvcLinux_ という名前のフォルダーを作成します。
1. 構成サーバー マシンにサインインします。
1. 構成サーバーマシンで、管理コマンド プロンプトを開きます。
1. パスフレーズファイルを生成するには、次のコマンドを実行します。
```Console
cd %ProgramData%\ASR\home\svsystems\bin
genpassphrase.exe -v > MobSvc.passphrase
```
1. Copy the _MobSvc.passphrase_ file to the Windows and Linux folders.
1. To browse to the folder that contains the installation files, run the following command:
```Console
cd %ProgramData%\ASR\home\svsystems\pushinstallsvc\repository
```
1. Copy these installation files to the network share:
   - For Windows, copy _Microsoft-ASR_UA_version_Windows_GA_date_Release.exe_ to _MobSvcWindows_.
   - For Linux, copy the following files to _MobSvcLinux_:
     - _Microsoft-ASR_UARHEL6-64release.tar.gz_
     - _Microsoft-ASR_UARHEL7-64release.tar.gz_
     - _Microsoft-ASR_UASLES11-SP3-64release.tar.gz_
     - _Microsoft-ASR_UASLES11-SP4-64release.tar.gz_
     - _Microsoft-ASR_UAOL6-64release.tar.gz_
     - _Microsoft-ASR_UAUBUNTU-14.04-64release.tar.gz_
1. Copy the code to the Windows or Linux folder as described in the next steps. The steps assume that:
   - The configuration server's IP address is `192.168.3.121`.
   - The secure network file share is `\\ContosoSecureFS\MobilityServiceInstallers`.
### <a name="copy-code-to-the-windows-folder"></a>Windows フォルダーにコードをコピーする
次のコードをコピーします。
- _MobSvcWindows_ フォルダーにコードを _install.bat_ として保存します。
- このスクリプトの`[CSIP]`プレースホルダーを、構成サーバーのIPアドレスの実際の値に置き換えます。
- このスクリプトは、Mobility Service エージェントの新規インストールと、すでにインストールされているエージェントの更新をサポートしています。
```DOS
Time /t >> C:\Temp\logfile.log
REM ==================================================
REM ==== Clean up the folders ========================
RMDIR /S /q %temp%\MobSvc
MKDIR %Temp%\MobSvc
MKDIR C:\Temp
REM ==================================================
REM ==== Copy new files ==============================
COPY M*.* %Temp%\MobSvc
CD %Temp%\MobSvc
REN Micro*.exe MobSvcInstaller.exe
REM ==================================================
REM ==== Extract the installer =======================
MobSvcInstaller.exe /q /x:%Temp%\MobSvc\Extracted
REM ==== Wait 10s for extraction to complete =========
TIMEOUT /t 10
REM =================================================
REM ==== Perform installation =======================
REM =================================================
CD %Temp%\MobSvc\Extracted
whoami >> C:\Temp\logfile.log
SET PRODKEY=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
REG QUERY %PRODKEY%\{275197FC-14FD-4560-A5EB-38217F80CBD1}
IF NOT %ERRORLEVEL% EQU 0 (
echo "Product is not installed. Goto INSTALL." >> C:\Temp\logfile.log
GOTO :INSTALL
) ELSE (
echo "Product is installed." >> C:\Temp\logfile.log
echo "Checking for Post-install action status." >> C:\Temp\logfile.log
GOTO :POSTINSTALLCHECK
)
:POSTINSTALLCHECK
REG QUERY "HKLM\SOFTWARE\Wow6432Node\InMage Systems\Installed Products\5" /v "PostInstallActions" | Find "Succeeded"
If %ERRORLEVEL% EQU 0 (
echo "Post-install actions succeeded. Checking for Configuration status." >> C:\Temp\logfile.log
GOTO :CONFIGURATIONCHECK
) ELSE (
echo "Post-install actions didn't succeed. Goto INSTALL." >> C:\Temp\logfile.log
GOTO :INSTALL
)
:CONFIGURATIONCHECK
REG QUERY "HKLM\SOFTWARE\Wow6432Node\InMage Systems\Installed Products\5" /v "AgentConfigurationStatus" | Find "Succeeded"
If %ERRORLEVEL% EQU 0 (
echo "Configuration has succeeded. Goto UPGRADE." >> C:\Temp\logfile.log
GOTO :UPGRADE
) ELSE (
echo "Configuration didn't succeed. Goto CONFIGURE." >> C:\Temp\logfile.log
GOTO :CONFIGURE
)
:INSTALL
echo "Perform installation." >> C:\Temp\logfile.log
UnifiedAgent.exe /Role MS /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" /Platform "VmWare" /Silent
IF %ERRORLEVEL% EQU 0 (
echo "Installation has succeeded." >> C:\Temp\logfile.log
(GOTO :CONFIGURE)
) ELSE (
echo "Installation has failed." >> C:\Temp\logfile.log
GOTO :ENDSCRIPT
)
:CONFIGURE
echo "Perform configuration." >> C:\Temp\logfile.log
cd "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent"
UnifiedAgentConfigurator.exe /CSEndPoint "[CSIP]" /PassphraseFilePath %Temp%\MobSvc\MobSvc.passphrase
IF %ERRORLEVEL% EQU 0 (
echo "Configuration has succeeded." >> C:\Temp\logfile.log
) ELSE (
echo "Configuration has failed." >> C:\Temp\logfile.log
)
GOTO :ENDSCRIPT
:UPGRADE
echo "Perform upgrade." >> C:\Temp\logfile.log
UnifiedAgent.exe /Platform "VmWare" /Silent
IF %ERRORLEVEL% EQU 0 (
echo "Upgrade has succeeded." >> C:\Temp\logfile.log
) ELSE (
echo "Upgrade has failed." >> C:\Temp\logfile.log
)
GOTO :ENDSCRIPT
:ENDSCRIPT
echo "End of script." >> C:\Temp\logfile.log
```
### <a name="copy-code-to-the-linux-folder"></a>Linux フォルダーにコードをコピーする
次のコードをコピーします。
- _MobSvcLinux_ フォルダーにコードを _install_linux.sh_ として保存します。
- このスクリプトの`[CSIP]`プレースホルダーを、構成サーバーのIPアドレスの実際の値に置き換えます。
- このスクリプトは、Mobility Service エージェントの新規インストールと、すでにインストールされているエージェントの更新をサポートしています。
```Bash
#!/usr/bin/env bash
rm -rf /tmp/MobSvc
mkdir -p /tmp/MobSvc
INSTALL_DIR='/usr/local/ASR'
VX_VERSION_FILE='/usr/local/.vx_version'
echo "=============================" >> /tmp/MobSvc/sccm.log
echo `date` >> /tmp/MobSvc/sccm.log
echo "=============================" >> /tmp/MobSvc/sccm.log
if [ -f /etc/oracle-release ] && [ -f /etc/redhat-release ]; then
if grep -q 'Oracle Linux Server release 6.*' /etc/oracle-release; then
if uname -a | grep -q x86_64; then
OS="OL6-64"
echo $OS >> /tmp/MobSvc/sccm.log
cp *OL6*.tar.gz /tmp/MobSvc
fi
fi
elif [ -f /etc/redhat-release ]; then
if grep -q 'Red Hat Enterprise Linux Server release 6.* (Santiago)' /etc/redhat-release || \
grep -q 'CentOS Linux release 6.* (Final)' /etc/redhat-release || \
grep -q 'CentOS release 6.* (Final)' /etc/redhat-release; then
if uname -a | grep -q x86_64; then
OS="RHEL6-64"
echo $OS >> /tmp/MobSvc/sccm.log
cp *RHEL6*.tar.gz /tmp/MobSvc
fi
elif grep -q 'Red Hat Enterprise Linux Server release 7.* (Maipo)' /etc/redhat-release || \
grep -q 'CentOS Linux release 7.* (Core)' /etc/redhat-release; then
if uname -a | grep -q x86_64; then
OS="RHEL7-64"
echo $OS >> /tmp/MobSvc/sccm.log
cp *RHEL7*.tar.gz /tmp/MobSvc
fi
fi
elif [ -f /etc/SuSE-release ] && grep -q 'VERSION = 11' /etc/SuSE-release; then
if grep -q "SUSE Linux Enterprise Server 11" /etc/SuSE-release && grep -q 'PATCHLEVEL = 3' /etc/SuSE-release; then
if uname -a | grep -q x86_64; then
OS="SLES11-SP3-64"
echo $OS >> /tmp/MobSvc/sccm.log
cp *SLES11-SP3*.tar.gz /tmp/MobSvc
fi
elif grep -q "SUSE Linux Enterprise Server 11" /etc/SuSE-release && grep -q 'PATCHLEVEL = 4' /etc/SuSE-release; then
if uname -a | grep -q x86_64; then
OS="SLES11-SP4-64"
echo $OS >> /tmp/MobSvc/sccm.log
cp *SLES11-SP4*.tar.gz /tmp/MobSvc
fi
fi
elif [ -f /etc/lsb-release ] ; then
if grep -q 'DISTRIB_RELEASE=14.04' /etc/lsb-release ; then
if uname -a | grep -q x86_64; then
OS="UBUNTU-14.04-64"
echo $OS >> /tmp/MobSvc/sccm.log
cp *UBUNTU-14*.tar.gz /tmp/MobSvc
fi
fi
else
exit 1
fi
if [ -z "$OS" ]; then
exit 1
fi
Install()
{
echo "Perform Installation." >> /tmp/MobSvc/sccm.log
./install -q -d ${INSTALL_DIR} -r MS -v VmWare
RET_VAL=$?
echo "Installation Returncode: $RET_VAL" >> /tmp/MobSvc/sccm.log
if [ $RET_VAL -eq 0 ]; then
echo "Installation has succeeded. Proceed to configuration." >> /tmp/MobSvc/sccm.log
Configure
else
echo "Installation has failed." >> /tmp/MobSvc/sccm.log
exit $RET_VAL
fi
}
Configure()
{
echo "Perform configuration." >> /tmp/MobSvc/sccm.log
${INSTALL_DIR}/Vx/bin/UnifiedAgentConfigurator.sh -i [CSIP] -P MobSvc.passphrase
RET_VAL=$?
echo "Configuration Returncode: $RET_VAL" >> /tmp/MobSvc/sccm.log
if [ $RET_VAL -eq 0 ]; then
echo "Configuration has succeeded." >> /tmp/MobSvc/sccm.log
else
echo "Configuration has failed." >> /tmp/MobSvc/sccm.log
exit $RET_VAL
fi
}
Upgrade()
{
echo "Perform Upgrade." >> /tmp/MobSvc/sccm.log
./install -q -v VmWare
RET_VAL=$?
echo "Upgrade Returncode: $RET_VAL" >> /tmp/MobSvc/sccm.log
if [ $RET_VAL -eq 0 ]; then
echo "Upgrade has succeeded." >> /tmp/MobSvc/sccm.log
else
echo "Upgrade has failed." >> /tmp/MobSvc/sccm.log
exit $RET_VAL
fi
}
cp MobSvc.passphrase /tmp/MobSvc
cd /tmp/MobSvc
tar -zxvf *.tar.gz
if [ -e ${VX_VERSION_FILE} ]; then
echo "${VX_VERSION_FILE} exists. Checking for configuration status." >> /tmp/MobSvc/sccm.log
agent_configuration=$(grep ^AGENT_CONFIGURATION_STATUS "${VX_VERSION_FILE}" | cut -d"=" -f2 | tr -d " ")
echo "agent_configuration=$agent_configuration" >> /tmp/MobSvc/sccm.log
if [ "$agent_configuration" == "Succeeded" ]; then
echo "Agent is already configured. Proceed to Upgrade." >> /tmp/MobSvc/sccm.log
Upgrade
else
echo "Agent is not configured. Proceed to Configure." >> /tmp/MobSvc/sccm.log
Configure
fi
else
Install
fi
cd /tmp
```
### <a name="create-a-package"></a>パッケージを作成する
1. Configuration Manager コンソールにサインインし、 **ソフトウェアライブラリ** > **アプリケーション管理** > **パッケージ** に移動します。
1. **[パッケージ]** を右クリックして **[パッケージの作成]** に進みます。
1. 名前、説明、製造元、言語、バージョンなど、パッケージの詳細を指定します。
1. **[このパッケージはソース ファイルを含む]** を選択します。
1. **参照** をクリックし、関連するインストーラー( _MobSvcWindows_ または _MobSvcLinux_ )を含むネットワーク共有とフォルダーを選択します。 次に、 **[次へ]** を選択します。
![パッケージとプログラムの作成ウィザードのスクリーンショット](./media/vmware-azure-mobility-install-configuration-mgr/create_sccm_package.png)
1. **[作成するプログラムの種類の選択]** ページで、 **[標準プログラム]** > **[次へ]** を選択します。
![[標準プログラム] オプションを示すパッケージとプログラムの作成ウィザードのスクリーンショット。](./media/vmware-azure-mobility-install-configuration-mgr/sccm-standard-program.png)
1. **[この標準プログラムに関する情報の指定]** ページで、次の値を指定します。
**パラメーター** | **Windows の値** | **Linux の値**
--- | --- | ---
**名前** | Microsoft Azure Mobility Service のインストール (Windows) | Microsoft Azure Mobility Service のインストール (Linux)。
**コマンド ライン** | install.bat | ./install_linux.sh
**プログラムの実行条件** | ユーザーがログオンしているかどうか | ユーザーがログオンしているかどうか
**その他のパラメーター** | 既定の設定を使用する | 既定の設定を使用する
![標準プログラムに指定できる情報を示すスクリーンショット。](./media/vmware-azure-mobility-install-configuration-mgr/sccm-program-properties.png)
1. **この標準プログラムの要件を指定** で、次のタスクを実行します。
- Windows マシンの場合は、 **[指定したプラットフォームだけで実行可能]** を選択します。 次に、 [サポートされているWindowsオペレーティングシステム](vmware-physical-azure-support-matrix.md#replicated-machines)を選択し、 **次へ** を選択します。
- Linux マシンでは、 **[任意のプラットフォームで実行可能]** を選択します。 **[次へ]** を選択します。
1. ウィザードを終了します。
### <a name="deploy-the-package"></a>パッケージをデプロイする
1. Configuration Manager コンソールで、パッケージを右クリックし、 **コンテンツの配布** を選択します。
![Configuration Manager コンソールのスクリーンショット](./media/vmware-azure-mobility-install-configuration-mgr/sccm_distribute.png)
1. パッケージのコピー先とする配布ポイントを選択します。 [詳細については、こちらを参照してください](/sccm/core/servers/deploy/configure/install-and-configure-distribution-points)。
1. ウィザードを完了します。 指定した配布ポイントへのパッケージのレプリケートが開始されます。
1. パッケージの配布が完了したら、パッケージを右クリックして **[展開]** に進みます。
![[展開] メニュー オプションを表示する Configuration Manager コンソールのスクリーンショット。](./media/vmware-azure-mobility-install-configuration-mgr/sccm_deploy.png)
1. 以前に作成した Windows または Linux デバイスのコレクションを選択します。
1. **[コンテンツの配布先の指定]** ページで **[配布ポイント]** を選択します。
1. **[このソフトウェアの展開を制御する設定の指定]** ページで、 **[目的]** を **[必須]** に設定します。
![ソフトウェアの展開ウィザードのスクリーンショット](./media/vmware-azure-mobility-install-configuration-mgr/sccm-deploy-select-purpose.png)
1. **[この展開のスケジュールを指定します]** でスケジュールを設定します。 [詳細については、こちらを参照してください](/sccm/apps/deploy-use/deploy-applications#bkmk_deploy-sched)。
- モビリティサービスは、指定したスケジュールに従ってインストールされます。
- 不要な再起動を避けるためには、月次のメンテナンス期間中またはソフトウェアの更新期間中にパッケージのインストールをスケジュールします。
1. **[配布ポイント]** ページで、設定を構成し、ウィザードを完了します。
1. Configuration Manager コンソールでデプロイの進行状況を監視します。 **[監視]** > **[デプロイ]** > _\<your package name\>_ に移動します。
### <a name="uninstall-the-mobility-service"></a>モビリティサービスをアンインストールする
Configuration Manager パッケージを作成して、Mobility Service をアンインストールできます。 たとえば、次のスクリプトはモビリティサービスをアンインストールします。
```DOS
Time /t >> C:\logfile.log
REM ==================================================
REM ==== Check if Mob Svc is already installed =======
REM ==== If not installed no operation required ========
REM ==== Else run uninstall command =====================
REM ==== {275197FC-14FD-4560-A5EB-38217F80CBD1} is ====
REM ==== guid for Mob Svc Installer ====================
whoami >> C:\logfile.log
NET START | FIND "InMage Scout Application Service"
IF %ERRORLEVEL% EQU 1 (GOTO :NOOPERATION) ELSE GOTO :UNINSTALL
:NOOPERATION
echo "No Operation Required." >> c:\logfile.log
GOTO :ENDSCRIPT
:UNINSTALL
echo "Uninstall" >> C:\logfile.log
MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1} /L+*V "C:\ProgramData\ASRSetupLogs\UnifiedAgentMSIUninstall.log"
:ENDSCRIPT
```
## <a name="next-steps"></a>次のステップ
VM の[レプリケーションを有効にします](vmware-azure-enable-replication.md)。
| 41.129717 | 712 | 0.692012 | yue_Hant | 0.733966 |
f48931195bb859d46018b9eef2d84e751fd11993 | 463 | md | Markdown | README.md | dsfbm/mh_uccn | 2362aee79c285590039e532249520af5c698676a | [
"Apache-2.0"
] | null | null | null | README.md | dsfbm/mh_uccn | 2362aee79c285590039e532249520af5c698676a | [
"Apache-2.0"
] | null | null | null | README.md | dsfbm/mh_uccn | 2362aee79c285590039e532249520af5c698676a | [
"Apache-2.0"
] | null | null | null | # uccn
A simple, clean official website
#### Project introduction
A clean company website homepage, using Spring Boot as the back-end framework.
#### Software architecture
- Back-end framework: Spring Boot 2.0.3.RELEASE
- Cache: Redis
- <del>File system: FastDFS</del>
- Image hosting: SM
- Connection pool: druid
- Database: MySQL
- Database access framework: MyBatis
#### Installation guide
<del>Because FastDFS is fairly troublesome to install, I used Docker to install it; here is the tutorial:</del>
<del>[FastDFS installation](https://hwy.ac.cn/post/c91g3b0dgngp.html)</del>
Because using the FastDFS file system was too much hassle, I have switched to a public image host. The upside is convenience; the downside is that each image can now be at most 5 MB, which is usually enough.
## Portal website
## This project was used for the project demo at the Tencent Cloud workshop in May 2020; it may be needed in later development, so I uploaded it to GitHub, together with the PPT used for that presentation.
| 17.148148 | 75 | 0.75162 | yue_Hant | 0.848732 |
f48a2cbcc659bbcee8394807982643de16fab0fc | 14,804 | md | Markdown | README.md | vjchong888/aml- | 8d82fa8fbe303d050cea4795ed1e1a2953a88d80 | [
"MIT"
] | null | null | null | README.md | vjchong888/aml- | 8d82fa8fbe303d050cea4795ed1e1a2953a88d80 | [
"MIT"
] | null | null | null | README.md | vjchong888/aml- | 8d82fa8fbe303d050cea4795ed1e1a2953a88d80 | [
"MIT"
] | null | null | null | # Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project
## AML & ALL Detection Classifiers
[![CURRENT RELEASE](https://img.shields.io/badge/CURRENT%20RELEASE-0.0.0-blue.svg)](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/0.0.0)
[![UPCOMING RELEASE](https://img.shields.io/badge/UPCOMING%20RELEASE-0.0.1-blue.svg)](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/0.0.1)
![Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project](https://www.PeterMossAmlAllResearch.com/media/images/banner.png)
The Peter Moss Acute Myeloid & Lymphoblastic Leukemia classifiers are a collection of projects that use computer vision to classify AML/ALL in unseen images.
This repository includes classifier projects made with Tensorflow, Caffe, Keras, Intel Movidius (NCS & NCS2) and OpenVino. We aim to create projects in Python, Java, C++, R, etc., and compare results to find out which types of classifiers are more accurate.
# Data Augmentation
![Acute Myeloid & Lymphoblastic Leukemia Classifier Data Augmentation program](https://www.PeterMossAmlAllResearch.com/media/images/repositories/bannerThin.png)
The [Acute Myeloid & Lymphoblastic Leukemia Classifier Data Augmentation program](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/Augmentation "Acute Myeloid & Lymphoblastic Leukemia Classifier Data Augmentation program") applies filters to datasets and increases the amount of training / test data.
# Python Classifiers
This repository hosts a collection of classifiers that have been developed by the team using the Python programming language. These classifiers include Caffe, FastAI, Movidius, OpenVino, pure Python and Tensorflow classifiers each project may have multiple classifiers.
| Projects | Description | Status |
| -------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | ------- |
| [Caffe Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Caffe/ "Caffe") | AML/ALL classifiers created using the Caffe framework. | Ongoing |
| [FastAI Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_FastAI/ "FastAI") | AML/ALL classifiers created using the FastAI framework. | Ongoing |
| [Keras Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Keras/ "Keras") | AML/ALL classifiers created using the Keras framework. | Ongoing |
| [Movidius Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Movidius/ "Movidius") | AML/ALL classifiers created using Intel Movidius(NC1/NCS2). | Ongoing |
| [OpenVino Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_OpenVino/ "OpenVino") | AML/ALL classifiers created using Intel OpenVino. | Ongoing |
| [Pure Python Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Pure/ "Pure Python") | AML/ALL classifiers created using pure Python. | Ongoing |
| [Tensorflow Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Tensorflow/ "Tensorflow") | AML/ALL classifiers created using the Tensorflow framework. | Ongoing |
## Caffe Python Classifiers
The Peter Moss Acute Myeloid & Lymphoblastic Leukemia Python Caffe classifiers are a collection of projects that use computer vision written in Python Caffe to classify Acute Myeloid & Lymphoblastic Leukemia in unseen images.
| Projects | Description | Status |
| ----------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------- |
| [AllCNN Caffe Classifier](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Caffe/allCNN "AllCNN Caffe Classifier") | allCNN classifier created using the Caffe framework. | Ongoing |
## Intel Movidius/NCS Python Classifiers
This repository hosts a collection of classifiers that have been developed by the team using Python and Intel Movidius NCS/NCS2.
| Project | Description | Status |
| ----------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- | ------- |
| [Movidius NCS](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Movidius/NCS/ "Movidius NCS") | AML/ALL classifiers created using Intel Movidius NCS. | Ongoing |
| [Movidius NCS2](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Movidius/NCS2/ "Movidius NCS2") | AML/ALL classifiers created using Intel Movidius NCS2 & OpenVino. | Ongoing |
## FastAI Python Classifiers
The Peter Moss Acute Myeloid & Lymphoblastic Leukemia Python FastAI classifier projects are a collection of projects that use computer vision programs written using FastAI to classify Acute Myeloid & Lymphoblastic Leukemia in unseen images.
| Model | Project | Description | Status | Author |
| ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Resnet | [FastAI Resnet50 Classifier](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_FastAI/Resnet50/ALL-FastAI-Resnet-50.ipynb "FastAI Resnet50 Classifier") | A FastAI model trained using Resnet50 | Ongoing | [Salvatore Raieli](https://github.com/salvatorera "Salvatore Raieli") / [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") |
| Resnet | [FastAI Resnet50(a) Classifier](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_FastAI/Resnet50/ALL-FastAI-Resnet-50-a.ipynb "FastAI Resnet50(a) Classifier") | A FastAI model trained using Resnet50 | Ongoing | [Salvatore Raieli](https://github.com/salvatorera "Salvatore Raieli") / [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") |
| Resnet | [FastAI Resnet34 Classifier](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_FastAI/Resnet34/ALL-FastAI-Resnet-34.ipynb "FastAI Resnet34 Classifier") | A FastAI model trained using Resnet34 | Ongoing | [Salvatore Raieli](https://github.com/salvatorera "Salvatore Raieli") / [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") |
| Resnet | [FastAI Resnet18 Classifier](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/Python/_FastAI/Resnet18/ALL_FastAI_Resnet_18.ipynb "FastAI Resnet18 Classifier") | A FastAI model trained using Resnet18 | Ongoing | [Salvatore Raieli](https://github.com/salvatorera "Salvatore Raieli") / [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") |
## Keras Python Classifiers
The Peter Moss Acute Myeloid & Lymphoblastic Leukemia Python Keras classifier projects are a collection of projects that use computer vision programs written using Keras to classify Acute Myeloid & Lymphoblastic Leukemia in unseen images.
| Dataset | Project | Description | Status | Author |
| -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| ALL_IDB2 | [QuantisedCode](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Keras/QuantisedCode/QuantisedCode.ipynb "QuantisedCode") | A model trained to detect ALL using Keras with Tensorflow Backend, [Paper 1](https://airccj.org/CSCP/vol7/csit77505.pdf "Paper 1") and the original [Dataset 2](https://homes.di.unimi.it/scotti/all/#datasets "Dataset 2"). | Ongoing | [Amita Kapoor](https://www.petermossamlallresearch.com/team/amita-kapoor/profile "Amita Kapoor") & [Taru Jain](https://www.petermossamlallresearch.com/students/student/taru-jain/profile "Taru Jain") |
| ALL_IDB1 | [AllCNN](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Keras/AllCNN/Paper_1/ALL_IDB1/Non_Augmented/AllCNN.ipynb "AllCNN") | A model trained to detect ALL using Keras with Tensorflow Backend, [Paper 1](https://airccj.org/CSCP/vol7/csit77505.pdf "Paper 1") and the original [Dataset 1](https://homes.di.unimi.it/scotti/all/#datasets "Dataset 1"). | Ongoing | [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") |
| ALL_IDB2 | [AllCNN](https://github.com/AMLResearchProject/AML-ALL-Classifiers/tree/master/Python/_Keras/AllCNN/Paper_1/ALL_IDB2/Non_Augmented/AllCNN.ipynb "AllCNN") | A model trained to detect ALL using Keras with Tensorflow Backend, [Paper 1](https://airccj.org/CSCP/vol7/csit77505.pdf "Paper 1") and the original [Dataset 2](https://homes.di.unimi.it/scotti/all/#datasets "Dataset 2"). | Ongoing | [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") |
## Detecting Acute Lymphoblastic Leukemia Using Caffe, OpenVino & Neural Compute Stick Series
A series of articles / tutorials by [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") that take you through attempting to replicate the work carried out in the [Acute Myeloid Leukemia Classification Using Convolution Neural Network In Clinical Decision Support System](https://airccj.org/CSCP/vol7/csit77505.pdf "Acute Myeloid Leukemia Classification Using Convolution Neural Network In Clinical Decision Support System") paper.
- [Introduction to convolutional neural networks in Caffe\*](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/Python/_Caffe/allCNN/Caffe-Layers.md "Introduction to convolutional neural networks in Caffe*")
- [Preparing the Acute Lymphoblastic Leukemia dataset](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/Python/_Caffe/allCNN/Data-Sorting.md "Preparing the Acute Lymphoblastic Leukemia dataset")
# Contributing
The Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research project encourages and welcomes code contributions, bug fixes and enhancements from the Github.
Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find information about our code of conduct on this page.
## Team Contributors
The following Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research project team members have contributed towards this repository:
- [Adam Milton-Barker](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "Adam Milton-Barker") - Bigfinite IoT Network Engineer & Intel Software Innovator, Barcelona, Spain
- [Salvatore Raieli](https://www.petermossamlallresearch.com/team/salvatore-raieli/profile "Salvatore Raieli") - PhD Immunology / Bioinformatician, Bologna, Italy
- [Dr Amita Kapoor](https://www.petermossamlallresearch.com/team/amita-kapoor/profile "Dr Amita Kapoor") - Delhi University, Delhi, India
## Student Program Contributors
The following AML/ALL AI Student Program Students & Research Interns have contributed towards this repository:
- [Taru Jain](https://www.petermossamlallresearch.com/students/student/taru-jain/profile "Taru Jain") - Pre-final year undergraduate pursuing B.Tech in IT
# Versioning
We use SemVer for versioning. For the versions available, see [Releases](https://github.com/AMLResearchProject/AML-ALL-Classifiers/releases "Releases").
# License
This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/LICENSE "LICENSE") file for details.
# Bugs/Issues
We use the [repo issues](https://github.com/AMLResearchProject/AML-ALL-Classifiers/issues "repo issues") to track bugs and general requests related to using this project.
| 117.492063 | 602 | 0.611321 | yue_Hant | 0.397611 |
f48a3905c6eae8b5592d05e0b2f3477bb8b2899f | 1,996 | md | Markdown | talks.md | ElRatonDeFuego/vfarcic.github.io | 527397d3b02faceec2562e90a8b74a136d68978f | [
"MIT"
] | null | null | null | talks.md | ElRatonDeFuego/vfarcic.github.io | 527397d3b02faceec2562e90a8b74a136d68978f | [
"MIT"
] | null | null | null | talks.md | ElRatonDeFuego/vfarcic.github.io | 527397d3b02faceec2562e90a8b74a136d68978f | [
"MIT"
] | null | null | null | # Talks
---
* [Combining Serverless Continuous Delivery With ChatOps](jx/prow.html) ([abstract](https://github.com/vfarcic/vfarcic.github.io/blob/master/jx/abstracts/prow.md))
* [Ten Commandments Of GitOps Applied To Continuous Delivery](jx/gitops.html) ([abstract](https://github.com/vfarcic/vfarcic.github.io/blob/master/jx/abstracts/ten-commandments.md))
* [The Recipe For Continuous Delivery](jx/recipe.html) ([abstract](https://github.com/vfarcic/vfarcic.github.io/blob/master/jx/abstracts/recipe.md))
* [Monitoring, Logging, and Auto-Scaling Kubernetes](devops25/index.html)
* [Continuous Deployment To Kubernetes](devops24/index.html)
# Talks
---
* [Managing And Maintaining a DevOps Assembly Line](devops-assembly/index.html)
* [The State Of DevOps](devops20/index.html)
* [Self-Healing and Self-Adaptation](devops22/index.html)
* [Zero-Downtime Deployments](devops21/rolling-updates.html)
* [Ten Commandments Of Continuous Delivery](ten-commandments/index.html)
# Talks
---
* [Auto-Scaling Services](devops22/auto-scaling.html)
* [Monitoring Swarm Services](devops21/monitoring.html)
* [Docker Swarm](devops21/index.html)
* [CD with Jenkins and Docker Swarm](jenkins-swarm/index.html)
* [Self-Healing Systems](self-healing/index.html)
# Talks
---
* [Scaling and Clustering with Docker Swarm](docker-swarm/index.html)
* [Continuously Deploying Microservices](cd-microservices/index.html)
* [Continuously Deploying Containers with Jenkins Pipeline To a Docker Swarm Cluster](cd-pipeline-swarm/index.html)
* [CloudBees Jenkins](jenkins/cb.html)
* [Continuous Deployment: The good, the bad, and the ugly](continuous-deployment-best-practices/index.html)
# Talks
---
* [Scaling To Infinity: The Quest For Fully Automated, Scalable, Self-Healing System With Zero-Downtime](scaling/index.html)
* [Continuous Deployment: The Ultimate Culmination of Software Craftsmanship](cd/index.html)
* [Specification By Example using Behavior Driven Development (BDD)](sbe_bdd/index.html)
| 38.384615 | 181 | 0.771042 | yue_Hant | 0.456678 |
f48a4395d1313f554a6f295908a62395172d1e46 | 55,017 | md | Markdown | README.md | lovejoey11/du-app-sign | 41d64c450196e804a1849f7ecb5189a38f744363 | [
"MIT"
] | 238 | 2019-04-04T08:34:51.000Z | 2022-03-14T06:48:51.000Z | README.md | lovejoey11/du-app-sign | 41d64c450196e804a1849f7ecb5189a38f744363 | [
"MIT"
] | 34 | 2019-04-23T08:59:24.000Z | 2022-02-12T03:44:54.000Z | README.md | lovejoey11/du-app-sign | 41d64c450196e804a1849f7ecb5189a38f744363 | [
"MIT"
] | 80 | 2019-04-04T08:36:05.000Z | 2022-02-18T03:12:38.000Z | # Cracking the sign parameter of the 毒 (Du) app
**Reverse-engineering the JS to crack the Du sign parameter and fetch the Du app's brand list pages, product detail pages, recent purchase data, and product search data**<br>
**The methods are:**
- Ⅰ. Hand-port the JS function to a Python function;
- Ⅱ. Execute the sign-generating JS logic with js2py;
- Ⅲ. Execute the sign-generating JS logic with PyExecJS;
- Ⅳ. MD5-hash the POST data agreed with the API (see the sketch below);
**Usage:**
(deprecated) [please refer to this tutorial]
(new method) [refer to the code examples below]
- X The first two APIs come from reverse-engineering the PC site's JS, and the last one from reverse-engineering the WeChat mini-program's JS,<br>
so pay attention to how the headers are constructed when building requests!!! The headers of the recent-purchases request need special care!
- √ The current APIs all come from the mini-program; use the headers from the code example below and the requests will succeed
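A minimal sketch of method Ⅳ (assumption: the sign is the MD5 hex digest of the alphabetically sorted key/value pairs concatenated with the app key, which is the pattern the deprecated key strings below follow):
```
import hashlib
def get_sign(params, app_key):
    # Sort keys alphabetically, concatenate each key+value pair, append the app key
    raw = ''.join('{}{}'.format(k, params[k]) for k in sorted(params)) + app_key
    return hashlib.md5(raw.encode('utf-8')).hexdigest()
# e.g. reproduces the deprecated 'lastId{}limit20productId{}sourceAppapp...' key string:
# get_sign({'lastId': 0, 'limit': 20, 'productId': 26850, 'sourceApp': 'app'},
#          '19bc545a393a25177083d4a748807cc0')
```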
- sign key list:
- Keyword product search API
- X (deprecated) 'limit20page{}sortMode{}sortType{}titleajunionId19bc545a393a25177083d4a748807cc0'.format(page, sortMode, sortType)
- X (deprecated) 'https://app.poizon.com/api/v1/h5/product/fire/search/list?title=aj&page={}&sortType={}&sortMode= {}&limit=20&unionId=&sign={}'.format(page, sortType,sortMode, sign)
- X (deprecated) page (e.g. 1) is the pagination parameter, sortType (e.g. 1) is the sort type, sortMode (e.g. 0) is the sort mode
- Brand/series list page API sign key construction:
- X (deprecated) 'lastId{}recommendId{}048a9c4943398714b356a696503d2d36'.format(page, brand_id)
- X (deprecated) https://m.poizon.com/mapi/product/recommendDetail?recommendId={}&lastId={}&sign={}
- X (deprecated) 'lastId{}limit20tabId{}19bc545a393a25177083d4a748807cc0'.format(lastId, tabId)
- X (deprecated) https://app.poizon.com/api/v1/h5/index/fire/shoppingTab?tabId={}&limit=20&lastId={}&sign={}
- X (deprecated) tabId (e.g. 4) is the sneaker category, lastId (e.g. 1) is the pagination parameter
- Product detail page API sign key construction:
- X (deprecated) 'productId{}sourceshareDetail048a9c4943398714b356a696503d2d36'.format(product_id)
- X (deprecated) https://m.poizon.com/mapi/product/detail?productId={}&source=shareDetail&sign={}
- X (deprecated) 'productId{}productSourceNamewx19bc545a393a25177083d4a748807cc0'.format(productId)
- X (deprecated) https://app.poizon.com/api/v1/h5/index/fire/flow/product/detail?productId={}&productSourceName=wx&sign={}
- Product recent purchase records API sign key construction:
- X (deprecated) 'lastId{}limit20productId{}sourceAppapp19bc545a393a25177083d4a748807cc0'.format(lastId, productId)
- X (deprecated) https://app.poizon.com/api/v1/h5/product/fire/recentSoldList?productId={}&lastId={}&limit=20&sourceApp=app&sign={}
- X (deprecated) lastId is the pagination parameter for the recent purchase records; for the next page, pass the lastId returned by the previous request as the page parameter, for example
```
X (deprecated)
# The first request starts with page 0
recensales_list_url = get_recensales_list_url(0, 26850)
# The lastId in the JSON returned by this request is the page parameter for the next request, so pass 28879520 for the second page, and so on
recensales_list_response_text = requests.get(url=recensales_list_url, headers=headers).text
{"status":200,"data":{"lastId":28879520,"list":[{"avatar":"https://du.hupucdn.com/FhQbGz1UY2Znmn1GxpHedYSm-AWV? imageView2/2/w/50/h/50","userName":"留*V","sizeDesc":"42码","formatTime":"6分钟前","price":92900,"orderSubTypeId":5,"bizType":0}]},"msg":"成功"}
```
- Future plans: add a search API and scrapy-redis distributed crawling
### PyExecJS code example
```
# import execjs
import requests
import hashlib
from urllib.parse import quote
# headers = {
# 'Host': "app.poizon.com",
# 'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)"
# " Chrome/53.0.2785.143 Safari/537.36 MicroMessenger/7.0.4.501 NetType/WIFI "
# "MiniProgramEnv/Windows WindowsWechat",
# 'appid': "wxapp",
# 'appversion': "4.4.0",
# 'content-type': "application/x-www-form-urlencoded",
# 'Accept-Encoding': "gzip, deflate",
# 'Accept': "*/*",
# }
#
#
# def get_recensales_list_url(lastId, productId):
#     # Recent purchases API
# with open('sign.js', 'r', encoding='utf-8')as f:
# all_ = f.read()
# ctx = execjs.compile(all_)
# sign = ctx.call('getSign',
# 'lastId{}limit20productId{}sourceAppapp19bc545a393a25177083d4a748807cc0'.format(lastId,
# productId))
# recensales_list_url = 'https://app.poizon.com/api/v1/h5/product/fire/recentSoldList?' \
# 'productId={}&lastId={}&limit=20&sourceApp=app&sign={}'.format(productId, lastId, sign)
# return recensales_list_url
#
#
# def get_search_by_keywords_url(page, sortMode, sortType):
#     # Keyword product search API
# with open('sign.js', 'r', encoding='utf-8')as f:
# all_ = f.read()
# ctx = execjs.compile(all_)
# # 53489
# sign = ctx.call('getSign',
# 'limit20page{}sortMode{}sortType{}titleajunionId19bc545a393a25177083d4a748807cc0'.format(page,
# sortMode,
# sortType))
# search_by_keywords_url = 'https://app.poizon.com/api/v1/h5/product/fire/search/list?title=aj&page={}&sortType={}&sortMode={}&' \
# 'limit=20&unionId=&sign={}'.format(page, sortType, sortMode, sign)
# return search_by_keywords_url
#
#
# def get_brand_list_url(lastId, tabId):
#     # Product category list API
# with open('sign.js', 'r', encoding='utf-8')as f:
# all_ = f.read()
# ctx = execjs.compile(all_)
# sign = ctx.call('getSign',
# 'lastId{}limit20tabId{}19bc545a393a25177083d4a748807cc0'.format(lastId, tabId))
# brand_list_url = 'https://app.poizon.com/api/v1/h5/index/fire/shoppingTab?' \
# 'tabId={}&limit=20&lastId={}&sign={}'.format(tabId, lastId, sign)
# return brand_list_url
#
#
# def get_product_detail_url(productId):
#     # Product detail API
# with open('sign.js', 'r', encoding='utf-8')as f:
# all_ = f.read()
# ctx = execjs.compile(all_)
# sign = ctx.call('getSign',
# 'productId{}productSourceNamewx19bc545a393a25177083d4a748807cc0'.format(productId))
# product_detail_url = 'https://app.poizon.com/api/v1/h5/index/fire/flow/product/detail?' \
# 'productId={}&productSourceName=wx&sign={}'.format(productId, sign)
# return product_detail_url
#
#
# recensales_list_url = get_recensales_list_url(0, 40755)
# search_by_keywords_url = get_search_by_keywords_url(0, 1, 0)
# brand_list_url = get_brand_list_url(1, 4)
# product_detail_url = get_product_detail_url(53489)
# recensales_list_response = requests.get(url=recensales_list_url, headers=headers)
# search_by_keywords_response = requests.get(url=search_by_keywords_url, headers=headers)
# brand_list_response = requests.get(url=brand_list_url, headers=headers)
# product_detail_response = requests.get(url=product_detail_url, headers=headers)
# print(recensales_list_response.text)
# print(search_by_keywords_response.text)
# print(brand_list_response.text)
# print(product_detail_response.text)
headers = {
'Host': "app.poizon.com",
'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)"
" Chrome/53.0.2785.143 Safari/537.36 MicroMessenger/7.0.4.501 NetType/WIFI "
"MiniProgramEnv/Windows WindowsWechat",
'appid': "wxapp",
'appversion': "4.4.0",
'content-type': "application/json",
'Accept-Encoding': "gzip, deflate, br",
'Accept': "*/*",
}
index_load_more_url = 'https://app.poizon.com/api/v1/h5/index/fire/index'
# {"sign":"5e22051c5156608a85b12d501d615c61","tabId":"","limit":20,"lastId":1}
recensales_load_more_url = 'https://app.poizon.com/api/v1/h5/commodity/fire/last-sold-list'
# {"sign":"f44e26eb08becbd16b7ed268d83b3b8c","spuId":"73803","limit":20,"lastId":"","sourceApp":"app"}
product_detail_url = 'https://app.poizon.com/api/v1/h5/index/fire/flow/product/detail'
# {"sign":"5721d19afd7a7891b627abb9ac385ab0","spuId":"49413","productSourceName":"","propertyValueId":"0"}
category = {"code":200,"msg":"success","data":{"list":[{"catId":0,"catName":"品牌"},{"catId":1,"catName":"系列"},{"catId":3,"catName":"球鞋"},{"catId":6,"catName":"潮搭"},{"catId":8,"catName":"手表"},{"catId":1000119,"catName":"配件"},{"catId":7,"catName":"潮玩"},{"catId":9,"catName":"数码"},{"catId":1000008,"catName":"家电"},{"catId":726,"catName":"箱包"},{"catId":587,"catName":"美妆"},{"catId":945,"catName":"家居"}]},"status":200}
doCategoryDetail = {"code":200,"msg":"success","data":{"list":[{"brand":{"goodsBrandId":144,"brandName":"Nike","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3724byte_94276b9b2c7361e9fa70da69894d2e91_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":13,"brandName":"Jordan","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3173byte_5c87bdf672c1b1858d994e281ce5f154_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":3,"brandName":"adidas","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24108byte_fc70f4c88211e100fe6c29a6f4a46a96_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":494,"brandName":"adidas originals","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24706byte_9802f5a4f25e6cd1b284a5b754cec4f0_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":439,"brandName":"Supreme","type":0,"logoUrl":"https://du.hupucdn.com/news_byte6426byte_c7ab640bf99963bfc2aff21ca4ff8322_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":79,"brandName":"GUCCI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25670byte_c9ca8b5347750651bebbf84dd7d12d01_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10359,"brandName":"Fear of God","type":0,"logoUrl":"https://du.hupucdn.com/news_byte17196byte_d5f7a627b65e90f6b850b613c14b54a2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10176,"brandName":"LOUIS VUITTON","type":0,"logoUrl":"https://du.hupucdn.com/news_byte33190byte_d14f21356f020e12a534b967be2bed77_w382h322.png"},"seriesList":[]},{"brand":{"goodsBrandId":1245,"brandName":"OFF-WHITE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte2408byte_4d329f274512ddb136989432292cdd3f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":45,"brandName":"THE NORTH FACE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte31268byte_e05935a55a37e7901640f7d09548499d_w151h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10082,"brandName":"FOG","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23329byte_3e247b5d598f7da36a1af24d08cb9ad8_w350h350.png"},"seriesList":[]},{"brand":{"goodsBrandId":10215,"brandName":"STONE ISLAND","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24302byte_570d51bb8c62233c3b52a9ffb05e5d74_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10223,"brandName":"HERMES","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30429byte_658ec36fbe99d2b3ae1e5a685ee1b20c_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10370,"brandName":"Jimmy Choo","type":0,"logoUrl":"https://du.hupucdn.com/news_byte18060byte_e664bd98b2a590c464e0e154b5f9ce53_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1310,"brandName":"Champion","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21403byte_6995f22e76a1a203f4f4dfd3ff43c21b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":176,"brandName":"Converse","type":0,"logoUrl":"https://du.hupucdn.com/news_byte8272byte_078b04a261c1bb1c868f1522c7ddcefc_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":2,"brandName":"Puma","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26564byte_a768870ae48f1a216dd756c2206c34b1_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":4,"brandName":"New Balance","type":0,"logoUrl":"https://du.hupucdn.com/news_byte6189byte_1cb7717a44b335651ad4656610142591_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":7,"brandName":"Under 
Armour","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21646byte_0bd049d8c27c8509f3166d68a388dfe9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":9,"brandName":"Vans","type":0,"logoUrl":"https://du.hupucdn.com/news_byte9507byte_49aaeb534cecab574949cf34b43da3a5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":33,"brandName":"李宁","type":0,"logoUrl":"https://du.hupucdn.com/news_byte7350byte_d60fa387aac42cb8c9b79700d720397d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":4981,"brandName":"JRs","type":0,"logoUrl":"https://du.hupucdn.com/news_byte5113byte_7a651984f882e48df46c67758d6934d2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10027,"brandName":"Dickies","type":0,"logoUrl":"https://du.hupucdn.com/news_byte37984byte_e8d6f32f6b17f736a422fac90c99d7e5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10113,"brandName":"ANTI SOCIAL SOCIAL CLUB","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28476byte_863a194b02da977009144bd9f10dde1f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":843,"brandName":"CASIO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte4156byte_4d3d1a6e2beca7f700e1ac92ea6b2fdf_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10141,"brandName":"UNDEFEATED","type":0,"logoUrl":"https://du.hupucdn.com/news_byte16826byte_6d7166e0081d6b42619e54ca06900406_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10250,"brandName":"NINTENDO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte29039byte_b5b91acfeaf88ec0c76df08b08e6e5cd_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10001,"brandName":"Thrasher","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30750byte_446d35e36b912ad366d8f72a6a9cc5e4_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10021,"brandName":"Cav Empt","type":0,"logoUrl":"https://du.hupucdn.com/news_byte61774byte_7a5969f3694fc71f727c63e8bc3d95d5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10032,"brandName":"Burberry","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27771byte_9031f22329273c84170de8aa0f7d7c67_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10037,"brandName":"C2H4","type":0,"logoUrl":"https://du.hupucdn.com/news_byte42973byte_8681e3b6092a4dbf6938621cb28e75f4_w284h284.png"},"seriesList":[]},{"brand":{"goodsBrandId":10043,"brandName":"Mitchell & Ness","type":0,"logoUrl":"https://du.hupucdn.com/news_byte35794byte_37e1d65f0c9df1c61c217bded6435b9d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10046,"brandName":"Moncler","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27878byte_9066687c9718c1168f8846653018a935_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1860,"brandName":"THOM BROWNE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte7866byte_fc0d1f01af88a3425bb106fe21f720a9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10062,"brandName":"VLONE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte38175byte_da7a862bd765220eb9bb00efbf5cfab3_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10072,"brandName":"HUAWEI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte53433byte_8949768c520c73adfd0798c7416ff642_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10073,"brandName":"Canada 
Goose","type":0,"logoUrl":"https://du.hupucdn.com/news_byte40959byte_26901c3ba55661a1ea668d49c599e86b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1131,"brandName":"隐蔽者","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28746byte_5a165d2728e81983d7bbd59739e56b97_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10095,"brandName":"FMACM","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21768byte_8480a963cd231f2ea4ec032753238cd9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10098,"brandName":"Onitsuka Tiger","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27415byte_63662423bde0d02cb8576ff47afb270d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10103,"brandName":"Guuka","type":0,"logoUrl":"https://du.hupucdn.com/news_byte29973byte_ef6eeef0535a0c9b2ade4fb6efc0aa06_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":634,"brandName":"A BATHING APE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte8375byte_cedf2c6ad46d2ac60f0c2f86cbccffcd_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10115,"brandName":"Mishkanyc","type":0,"logoUrl":"https://du.hupucdn.com/news_byte34270byte_f4816f6a78ea1e528144fadcaf671db6_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10119,"brandName":"AMBUSH","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22861byte_44dadd412d9c0b3c57f0d382f5554f5c_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10120,"brandName":"CDG Play","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30859byte_4b1af0b2a9bc9007a3f0c5385fea1f8d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10164,"brandName":"GOTNOFEARS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22932byte_c11b6cb93dc1f26ee98d44db6017c522_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10172,"brandName":"THE NORTH FACE PURPLE LABEL","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22025byte_ce1dc9be1690240e01f1365eedac1362_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10186,"brandName":"Subcrew","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23252byte_48b6f7f61cad6c18fb7adafe9696ef1f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10243,"brandName":"Aftermaths","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20974byte_9b01f95cba8e542a258ccc7f1ccf9647_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10248,"brandName":"Aape","type":0,"logoUrl":"https://du.hupucdn.com/news_byte34614byte_beb0b973078c0e17171edea6cd0c715d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10263,"brandName":"apm 
monaco","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28592byte_384295707fe8076c1cc738402f3b928b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":6,"brandName":"Reebok","type":0,"logoUrl":"https://du.hupucdn.com/news_byte8192byte_cda902674ee7d4d4c51d32b834a76e7b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10264,"brandName":"DUEPLAY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20052byte_41e361c6d2df6895d3b38dfdd9c2efa9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":577,"brandName":"PALACE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte8691byte_5577b630f2fd4fcb8d6f7f45071acc40_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10000,"brandName":"ROARINGWILD","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21733byte_bf73cc451933ed2392d31620c08f76d6_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1222,"brandName":"NOAH","type":0,"logoUrl":"https://du.hupucdn.com/news_byte33752byte_11638ecd79b8d3c7fd29b92dfb9f5f5b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10030,"brandName":"Carhartt WIP","type":0,"logoUrl":"https://du.hupucdn.com/news_byte9975byte_222768bbe7d7daffed18c85090de6153_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10298,"brandName":"BANU","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24263byte_c480365559a6a9808a69547bc8084579_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10357,"brandName":"EQUALIZER","type":0,"logoUrl":"https://du.hupucdn.com/news_byte18391byte_b300fb77b24a776296bc7a92873d1839_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10075,"brandName":"RIPNDIP","type":0,"logoUrl":"https://du.hupucdn.com/news_byte35669byte_5e1b7e4e57ee4568bfb9bf657c8146c5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10087,"brandName":"Stussy","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30589byte_c37b4863d6248dd92a588696c9e1dfe5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10092,"brandName":"NPC","type":0,"logoUrl":"https://du.hupucdn.com/news_byte5399byte_249e0a587b457f46e7e6bad9fd7234bc_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10118,"brandName":"Red 
Charcoal","type":0,"logoUrl":"https://du.hupucdn.com/news_byte19757byte_91817a270103c441138260ad9812f1d8_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10106,"brandName":"Dior","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21672byte_cfe86702e27c9870189b6ad6a7f795b8_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":65,"brandName":"Apple","type":0,"logoUrl":"https://du.hupucdn.com/news_byte1253byte_c8fcc08b731e30d4d1453c77bb4417d7_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10039,"brandName":"Randomevent","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22681byte_d422ea97ad63fe2718dc0a3208602adb_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10260,"brandName":"Swarovski","type":0,"logoUrl":"https://du.hupucdn.com/news_byte29555byte_630db5e96d66e6f5287c293cd78caf27_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10010,"brandName":"PRADA","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21018byte_f70b725f896e7d48d7cf5de27efb693a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10107,"brandName":"UNIQLO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22384byte_3363df740785c4f46ff7e9e60732d44c_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10173,"brandName":"HIPANDA","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30445byte_c66a18c65a8fed91a6790c51b4742f5a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10219,"brandName":"HUMAN MADE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte37800byte_decdf76555f22cb831ac23e08ab2018b_w150h150.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10348,"brandName":"PINKO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21540byte_a15095f6f394d948ae5ab220d8d1a122_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10349,"brandName":"ISSEY MIYAKE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21345byte_f151d2b6a79e5f9189d3edeb5febd68d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10094,"brandName":"HERON PRESTON","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21445byte_263c6af2c24fe8eb020f2fad8956aae6_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10217,"brandName":"HARSH AND CRUEL","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27648byte_68693ca8aa0d93a7bc4efacf7131a1d0_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10229,"brandName":"COACH","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22573byte_2d963bacc8403d3bff0f44edd04dab64_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10230,"brandName":"MICHAEL KORS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21530byte_ae96688098a3d529824fa5cb71bf3765_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10273,"brandName":"XLARGE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte38276byte_061c84c498eadd44eb11704a5420f785_w688h628.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10012,"brandName":"Balenciaga","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25051byte_c9098ab23afe7e5b2ebbe5bbe02cf20b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10097,"brandName":"New Era","type":0,"logoUrl":"https://du.hupucdn.com/news_byte12037byte_2650d81fe891a08f41ba8ef58f92e4c8_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10236,"brandName":"UNDER 
GARDEN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21683byte_ebeac156f070e9f48e74e13567a908ad_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10216,"brandName":"Suamoment","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26349byte_999d4032e637fbccca3aaf6db95ef2ea_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":8,"brandName":"Asics","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3352byte_31e3a9553fef833c3004a84c4c016093_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10261,"brandName":"TIFFANY & CO.","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21293byte_6af55972221b24da968a1b9957080c1e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10045,"brandName":"CHANEL","type":0,"logoUrl":"https://du.hupucdn.com/news_byte32784byte_b947def32e782d20594230896ad2b342_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10302,"brandName":"alexander wang","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20702byte_4635ead9077fefadaf9875638452a339_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10347,"brandName":"MLB","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21412byte_ba8096a44748f1e896828a6d1196f571_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10191,"brandName":"BEASTER","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23781byte_319f230d9345cd5154b908631d2bb868_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10245,"brandName":"izzue","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21393byte_cf5140071b8824d433f4b32a51d49220_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10257,"brandName":"FIVE CM","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22092byte_3cd80b1e49ad6bd28e5e371327424532_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10262,"brandName":"Acne Studios","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26766byte_8e0cc00c1fccd1958d56b008370107cc_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":4984,"brandName":"得物","type":0,"logoUrl":"https://du.hupucdn.com/news_byte1528byte_5fd1d0d6bd3ff23d2b6c0d2933da1b8e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10235,"brandName":":CHOCOOLATE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21808byte_2418d1805f60850c961c31ea40982ed2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10096,"brandName":"PALM 
ANGELS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22102byte_3d85af8f2d0566d7a1620404f9432be5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1302,"brandName":"WTAPS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte5814byte_ce947464a6105c6aef7d7fb981aaa61e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":4983,"brandName":"NEIGHBORHOOD","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3804byte_316eb37426516d3cf8252fcbab6aa0cf_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10024,"brandName":"BANDAI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte41491byte_d575b1bce5ab5754ea2444c0bb415782_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":2389,"brandName":"LEGO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte33157byte_7e92ea9e2640626b9d52ce2a9fd2a75c_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10224,"brandName":"DANGEROUSPEOPLE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27331byte_fd65dfa6630e179a1bed93573a9b32cb_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10241,"brandName":"Acupuncture","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26897byte_26ec3a3b2f532353a635cff0752eb743_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10255,"brandName":"MITARBEITER(IN)","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21741byte_058a1342e063fbb9fd341a1f3aca48a6_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1318,"brandName":"FILA","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24479byte_79cb074cf3d73a420d75a435aee91fe2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10315,"brandName":"SANKUANZ","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23445byte_3e251bad0695be109b037d80b9908e2a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10221,"brandName":"*EVAE+MOB","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22561byte_e52e3571b938f7027d4ea5ce1c406cb8_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10226,"brandName":"BABAMA","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23766byte_8a0dcd76a4e7032a66209f40d0b6ec85_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10228,"brandName":"OMTO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23242byte_06cc9e12bd4efdf4d1885c401c224e10_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10237,"brandName":"OMT","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24518byte_6be637e798e477e8c2e9e7d502e59b25_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10244,"brandName":"VERAF CA","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22401byte_7c150e55199eb32e12bb29f01ed6c801_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":34,"brandName":"安踏","type":0,"logoUrl":"https://du.hupucdn.com/news_byte925102byte_5d9ca8cebc2286d70ef66f4f4a8f2983_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10210,"brandName":"CDG","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20964byte_54ad49c012c262b02805451b9462f481_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10369,"brandName":"× × 
DESIGN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21649byte_f3502bde5ec295493f6b1ffff54fdad9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10017,"brandName":"PLACES+FACES","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22034byte_87c416321487ad3071b2e886690b6c83_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10020,"brandName":"VERSACE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte17169byte_7a363914e3e65ddcab341f2088451861_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10022,"brandName":"LONGINES","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3779byte_b43ce49900670dcd8855801cd4ecbc3e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10029,"brandName":"McQ","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20529byte_0a3c77e055b5ea67e5dd976e0ae15ef9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10031,"brandName":"Alpha Industries","type":0,"logoUrl":"https://du.hupucdn.com/news_byte10726byte_a964cf670ceeb6e83dd7dc74670d2d0e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10033,"brandName":"Hasbro","type":0,"logoUrl":"https://du.hupucdn.com/news_byte33084byte_6ee0b3af2fe9007bd43d7e43b2d9cbcd_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10048,"brandName":"Boy London","type":0,"logoUrl":"https://du.hupucdn.com/news_byte12032byte_dc8badd06954530bab52bee2dcd2281e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10053,"brandName":"MOSCHINO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3922byte_e6fed7cc9d76aaed983119b1f6ea4da2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10099,"brandName":"VEJA","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27380byte_01358f743e3668ee4f860cc03c6cee71_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10105,"brandName":"GAON","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24582byte_5dc995e735d926f10686a3b2e4f99ffe_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10110,"brandName":"EDCO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25568byte_4c07bb13aeb5e88d127f5f30b0582ed2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10111,"brandName":"FYP","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24609byte_694a64457745672bd4e7b657b6753993_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10125,"brandName":"OPPO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25629byte_7fb979a04f572beaa96b0583f4204748_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10130,"brandName":"Corade","type":0,"logoUrl":"https://du.hupucdn.com/news_byte5181byte_7a202830db26f4d93c565e2cc1e0af4d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10132,"brandName":"MostwantedLab","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21748byte_4c88547590072659de8e0731fef96a4f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10135,"brandName":"PRBLMS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22576byte_910768509dbfed23275fa7989753dffd_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10138,"brandName":"zippo","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28711byte_8d40191080a1a21111d66ce2ee000e90_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10142,"brandName":"UNVESNO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte2826byte_fdb1b1046cae382cfb60cac11f9b281d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10150,"brandName":"vivo","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26242byte_b829524880093cc2099d470299889c89_w150h150.png"},"seriesList":[]},{"brand
":{"goodsBrandId":10246,"brandName":"HOKA ONE ONE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26308byte_1e82701c70c29b871b630adb45dbebd3_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10247,"brandName":"KEEN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30591byte_670bf1d37ce8479f3fa09a55d28dcb93_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10002,"brandName":"Y-3","type":0,"logoUrl":"https://du.hupucdn.com/news_byte29394byte_2f32b853acb651e2831f8797fe29fbfa_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10267,"brandName":"LOCCITANE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25047byte_c8480c6f2b9a32fc3a54524146bd1165_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10013,"brandName":"Neil Barrett","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3026byte_cdeab7bf75187f17fa8c069c9a3a051a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10269,"brandName":"Charlotte Tilbury ","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23124byte_0113ad2025780a77a7f5131571ecee54_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10014,"brandName":"KENZO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte7359byte_c71c56a9aea36427f7bb106f18151548_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10270,"brandName":"BVLGARI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22644byte_9dfd426ffa4b796999f49447b6a67f13_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10271,"brandName":"Jo Malone London","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22269byte_51fd61cb797b6c2bbf0c0b84981c0948_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10016,"brandName":"Vetements","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24790byte_b66758a5ea903a5a005798f7d84d1498_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10274,"brandName":"GIORGIO ARMANI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21786byte_2857737687e9338787e5121b81e2fe27_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10019,"brandName":"Givenchy","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23368byte_51dcde3cd1f5ef90c5734249bcd17af0_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10275,"brandName":"GUERLAIN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21952byte_b06c9655af8bbd85bbacd7905c856d99_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10276,"brandName":"Fresh","type":0,"logoUrl":"https://du.hupucdn.com/news_byte29422byte_c362cefc52c99839bbf299bef133e165_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10277,"brandName":"Clé de Peau Beauté","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24675byte_c3c1c3b08d7926e82fd1681f920a955f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10278,"brandName":"LANCOME","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22324byte_41fe4c8042bfd4761df3cbde50eb1ab0_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10023,"brandName":"FENDI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte3735byte_363b11ad1f4b34f6b165818fe14ded88_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10279,"brandName":"Kiehls","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22486byte_39f44b549295514934db3bea8696551a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10280,"brandName":"MAC","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20822byte_0d14fc90e743b48c0d198505ac7acbbd_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10026,"brandName":"Yves Saint 
Laurent","type":0,"logoUrl":"https://du.hupucdn.com/news_byte29430byte_88593ab0095ed38806e07d4afd4889cf_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10282,"brandName":"LA MER","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21643byte_c2302d6ae27f2fa9569a224a49da3097_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10284,"brandName":"CLARINS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23308byte_a69c3dfc25e7a7c6400ffe44d8242a30_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10285,"brandName":"NARS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25821byte_311c624a310303b590d5d4c062f056ea_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10286,"brandName":"benefit","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25499byte_f2dadfa1b21c81b9e659300bf07e8d37_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10287,"brandName":"GLAMGLOW","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22830byte_eb7b37c9e623fc1b2ee13c291e1a29aa_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10288,"brandName":"Too Faced","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22156byte_47dd9a1fe0969e1648e912ee16cbb844_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10289,"brandName":"URBAN DECAY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21218byte_dc1d505013e356a3479ad5295b6f1e75_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10290,"brandName":"FOREO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20670byte_924d6ad6b63fda5c7dae0c77b6e55a3f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10035,"brandName":"Dyson","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28101byte_18fa7633c4e27f221c840813d784080f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10291,"brandName":"Christian Louboutin","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24594byte_d45524f35597ed91bbd7544f9e652172_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10293,"brandName":"xVESSEL","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23132byte_cc28fa998fb6243eb71054f6c2135db9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10044,"brandName":"TISSOT","type":0,"logoUrl":"https://du.hupucdn.com/news_byte31626byte_d4191837d926ec7bbd1e29c5fe46e595_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10300,"brandName":"PRIME 1 STUDIO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21836byte_44d1b5dc422073743a3c11647a8409a3_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10047,"brandName":"MCM","type":0,"logoUrl":"https://du.hupucdn.com/news_byte50051byte_3ed4a130ff4dc02151428c3e50978942_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10310,"brandName":"acme de la vie","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22262byte_a8a932aaedbab86b34249828b7ed32f8_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10311,"brandName":"BE@RBRICK","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23466byte_2e8da4a1f054a8e4682594a45612814e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10065,"brandName":"Timberland","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25895byte_88b2058d896df08f73cd94a47d2310f2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1000018,"brandName":"RAF 
SIMONS","type":0,"logoUrl":"https://du.hupucdn.com/FkYorY5yQT4Q4E66tjPrzETZ7R-p"},"seriesList":[]},{"brand":{"goodsBrandId":10068,"brandName":"PANERAI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte32912byte_36220760228f319bb75f52330c7e4b3e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1000020,"brandName":"正负零","type":0,"logoUrl":"https://du.hupucdn.com/FsrWiDmNZYaV1MjkRu0KMGU065zO"},"seriesList":[]},{"brand":{"goodsBrandId":10069,"brandName":"MIDO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26922byte_be1c2be794cb6d454620f4692647e268_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10070,"brandName":"Dupont","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28213byte_3d762e61d0d0ab8dec7d7dd92fe1dc99_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10071,"brandName":"kindle","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27862byte_be5d55c9c358e569b89fbf8e12fc20a4_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10331,"brandName":"WHY PLAY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte28380byte_363531321a5e823c56cc2f933f3be497_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10078,"brandName":"Logitech","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21472byte_ce700f9b3ca84b7a4a5f308f82bae04e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10341,"brandName":"anello","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25573byte_8080b45548c4b17bc976f8797ff158d5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1000045,"brandName":"DMCkal","type":0,"logoUrl":"https://du.hupucdn.com/FtcBWuRAZXH8rxoRoJSQDhQID6sT"},"seriesList":[]},{"brand":{"goodsBrandId":10350,"brandName":"A-COLD-WALL*","type":0,"logoUrl":"https://du.hupucdn.com/news_byte17623byte_ebcbe70f089dabe23e93588dc6ac66a3_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10354,"brandName":"CONKLAB","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21404byte_7fb50713eb0f986354735b304d5be896_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10101,"brandName":"GENANX/闪电","type":0,"logoUrl":"https://du.hupucdn.com/news_byte33425byte_c27fb3124848c48aee19c85441352048_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10358,"brandName":"FOOT 
INDUSTRY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20338byte_4e3b752398bf7ff9e16d9fe37e4ecee9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10360,"brandName":"BONELESS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte17690byte_8c346bdce381e0cf63c76f187a4fe042_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10361,"brandName":"umamiism","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22545byte_34df2f448a017f7fdb3e570606c241b9_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10362,"brandName":"华人青年","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22601byte_f52dc5ab9f4452f97babc47821d24021_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10363,"brandName":"FUNKMASTERS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21027byte_1576a0f677e9cacd8762be6db99d4c78_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":1000061,"brandName":"GUESS","type":0,"logoUrl":"https://du.hupucdn.com/Fi6tDRWTi5rnQiW-ZB5BB-FbPoZN"},"seriesList":[]},{"brand":{"goodsBrandId":1000062,"brandName":"Needles","type":0,"logoUrl":"https://du.hupucdn.com/FiwdfFP76yqpag1r50r1qVG-TXad"},"seriesList":[]},{"brand":{"goodsBrandId":10368,"brandName":"Subtle","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21987byte_4b242f882bf5df63b2da5eae632fa29c_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10116,"brandName":"EMPORIO ARMANI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22342byte_7060a8a9b7c6b335143673f5417a1944_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10124,"brandName":"AMONSTER","type":0,"logoUrl":"https://du.hupucdn.com/FjZYBGDKtsc_9n8ftq28XsMRxZxB"},"seriesList":[]},{"brand":{"goodsBrandId":10381,"brandName":"PCMY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21692byte_205236932bf037dab9965dd2be87085e_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10127,"brandName":"Rothco","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24098byte_12c50dcd1c1044fa4a6335bedd98e7d4_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10131,"brandName":"THE WIZ","type":0,"logoUrl":"https://du.hupucdn.com/news_byte6823byte_6a92589a3c42dcb6a1f3e849966daf53_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10388,"brandName":"TSMLXLT","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20983byte_d18f9145f160a5dd3b5d0ca45acec510_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10390,"brandName":"TRICKCOO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25337byte_823a55d69768b65e728f896971a0185d_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10391,"brandName":"NOCAO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22420byte_91b1ed0e11d4948a817f5ae35a0bfd99_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10136,"brandName":"PROS BY 
CH","type":0,"logoUrl":"https://du.hupucdn.com/news_byte6186byte_323eba7633b20d61d566dbc0bdb83f13_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10137,"brandName":"OXY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24524byte_535ab7db658c3f1c8e00a821e5587585_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10394,"brandName":"FLOAT","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25774byte_07be4b895302ee06cbd9ad53aa017ed5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10143,"brandName":"Suicoke","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22613byte_16cd0ea92dde2aea19b76b46491814bb_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10146,"brandName":"GARMIN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20845byte_5af278c31aad1b5e0982544840c2df96_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10151,"brandName":"BANPRESTO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte56132byte_ee71a1bac952dac4cd3f59b4c6673203_w800h800.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10153,"brandName":"Harman/Kardon","type":0,"logoUrl":"https://du.hupucdn.com/news_byte17446byte_134385ee63bf9d12cd7f98e27344310e_w150h150.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10413,"brandName":"OSCill","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23787byte_88a0a8c1b72cd8693957af2a53b89bd5_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10414,"brandName":"LIFEGOESON","type":0,"logoUrl":"https://du.hupucdn.com/FgHxeSwXg9SKBtRNrNNKjn-O8yxf"},"seriesList":[]},{"brand":{"goodsBrandId":10161,"brandName":"PSO Brand","type":0,"logoUrl":"https://du.hupucdn.com/news_byte47498byte_0bb2a47154dfa0fc914e98cfeae6c407_w1000h1000.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10167,"brandName":"EVISU","type":0,"logoUrl":"https://du.hupucdn.com/news_byte8745byte_0ba5f52e059d3e91803a05843c2b22e2_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10423,"brandName":"Maison Margiela","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22857byte_d8df7145e944cd7b203d3f5520b06c43_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10168,"brandName":"INXX","type":0,"logoUrl":"https://du.hupucdn.com/news_byte20442byte_4c50ce6dca7408dec92df78b72106e46_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10169,"brandName":"B&O","type":0,"logoUrl":"https://du.hupucdn.com/news_byte36636byte_b32fd6036ba60c4f868b197ada8b8c6f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10425,"brandName":"Charlie Luciano","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21051byte_28faf07c5d5a078a6fc18357413ce7c3_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10170,"brandName":"CITIZEN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21252byte_1158debba23c31ccfed657aa8ce762bd_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10171,"brandName":"LOFREE","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22408byte_1af23b04a9bc352bfff21edb63514d39_w150h150.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10429,"brandName":"Arcteryx","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24934byte_5ebce6f644ee8ebdbc89f04a068fc1af_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10174,"brandName":"DAMTOYS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte31687byte_84591020e1a16ce89318585a5d84e9fc_w150h150.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10175,"brandName":"POP 
MART","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24981byte_909c5a578874dac97c6d3796be69cdf3_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10179,"brandName":"野兽王国","type":0,"logoUrl":"https://du.hupucdn.com/news_byte36849byte_5af274d0628f77ca06994e30ed50d5c4_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10183,"brandName":"ALIENWARE/外星人","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25149byte_458a868f82f2ac201e5d2f56fc087d60_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10184,"brandName":"Herschel","type":0,"logoUrl":"https://du.hupucdn.com/news_byte34213byte_403f7a96cd33f1cfdb12769a7e870f65_w824h752.png"},"seriesList":[]},{"brand":{"goodsBrandId":10187,"brandName":"科大讯飞","type":0,"logoUrl":"https://du.hupucdn.com/news_byte25000byte_8351390b01275c1fcaa8f20a040b6dfe_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10443,"brandName":"chinism","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21910byte_d24daabd4b84bba6c37e22a42dd18602_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10188,"brandName":"韶音","type":0,"logoUrl":"https://du.hupucdn.com/news_byte6089byte_b5d736ec8a0c1269eceb94dcfa520c37_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10190,"brandName":"SAINT LAURENT","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21416byte_6836702d9d6f487e44b8922d8eaeb86b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10193,"brandName":"SPALDING","type":0,"logoUrl":"https://du.hupucdn.com/news_byte27011byte_12f13f15a9a198939f3d9b847dbdb214_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10195,"brandName":"Levis","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23207byte_ad584d3079f341b83325f4974b99e342_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10196,"brandName":"SENNHEISER","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21466byte_f597724d3c9eb3294441c608cb59e1fe_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10197,"brandName":"CASETIFY","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23539byte_3a85a808033ed920a29cac5fad902b6f_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10199,"brandName":"JMGO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26989byte_cdd8e059931aa405f55c5d2426862a6b_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10200,"brandName":"PHILIPS","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24239byte_5655ed4f287863630934c96aa823346a_w150h150.jpg"},"seriesList":[]},{"brand":{"goodsBrandId":10201,"brandName":"SEIKO","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22717byte_6bfa41469825cfd6c70f5e3080d5de6a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10202,"brandName":"GENANX","type":0,"logoUrl":"https://du.hupucdn.com/news_byte9514byte_77dee80e931dfd8b528f609c400951e4_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10204,"brandName":"TIANC","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21593byte_aa160ed7e50ca090d629490010346a9a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10205,"brandName":"Drew House","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30015byte_31ba151ba30e01ae6132ce9584b4e49a_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10206,"brandName":"小米/MI","type":0,"logoUrl":"https://du.hupucdn.com/news_byte23786byte_d04434a4af3b56e20758cbeaed4f4531_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10207,"brandName":"CALVIN 
KLEIN","type":0,"logoUrl":"https://du.hupucdn.com/news_byte24775byte_736593d1ee0995be050b1fbcec77bf46_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":2016,"brandName":"DanielWellington","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21711byte_f48145a65169f913b0139ff2e1811e78_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10211,"brandName":"飞智","type":0,"logoUrl":"https://du.hupucdn.com/news_byte26616byte_8001c14855f8528d7bccc31183bfaf75_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10213,"brandName":"TOM FORD","type":0,"logoUrl":"https://du.hupucdn.com/news_byte21248byte_2994883ac1627497e75e9b99ba327c48_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10485,"brandName":"RickyisClown","type":0,"logoUrl":"https://du.hupucdn.com/news_byte22583byte_98f823c8acb1459e6c72f5bdaf8a88b0_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10233,"brandName":"Dr.Martens","type":0,"logoUrl":"https://du.hupucdn.com/news_byte31892byte_8d31e2f10cc7fffe8d99545d757dfff6_w150h150.png"},"seriesList":[]},{"brand":{"goodsBrandId":10238,"brandName":"APPortfolio","type":0,"logoUrl":"https://du.hupucdn.com/news_byte30968byte_19dd10185fc4b82037621b3708a613e4_w150h150.png"},"seriesList":[]}]},"status":200}
def return_sign(raw_sign_code_str):
    # MD5-hash the raw sign string
m = hashlib.md5()
m.update(raw_sign_code_str.encode("utf8"))
sign = m.hexdigest()
return sign
def index_load_data(lastId):
    # Home page data feed
post_data = dict()
post_data['tabId'] = ''
post_data['limit'] = 20
post_data['lastId'] = lastId
post_data['sign'] = return_sign('lastId{}limit20tabId19bc545a393a25177083d4a748807cc0'.format(lastId))
post_data = str(post_data).encode('utf-8')
res_data = requests.post(index_load_more_url, headers=headers, data=post_data).text
print(res_data)
def recensales_load_more_data(spuId, lastId):
    # "Recent purchases" API
post_data = dict()
post_data['limit'] = 20
post_data['spuId'] = spuId
post_data['lastId'] = lastId
post_data['sourceApp'] = 'app'
post_data['sign'] = return_sign('lastId{}limit20sourceAppappspuId{}19bc545a393a25177083d4a748807cc0'.format(lastId, spuId))
post_data = str(post_data).encode('utf-8')
res_data = requests.post(recensales_load_more_url, headers=headers, data=post_data).text
print(res_data)
def search_by_keywords_load_more_data(keywords, page, sortMode, sortType):
    # Keyword product-search API
sign = return_sign('limit20page{}showHot-1sortMode{}sortType{}title{}unionId19bc545a393a25177083d4a748807cc0'.format(page, sortMode, sortType, keywords))
print(sign)
url = 'https://app.poizon.com/api/v1/h5/search/fire/search/list?' \
'sign={}&title={}&page={}&sortType={}' \
'&sortMode={}&limit=20&showHot=-1&unionId='.format(sign, quote(keywords), page, sortType, sortMode)
res_data = requests.get(url, headers=headers).text
print(res_data)
def brand_list_load_more_data(page, sortType, sortMode, catId, unionId, title=''):
    # Brand list page products API
sign = return_sign('catId{}limit20page{}showHot-1sortMode{}sortType{}title{}unionId{}19bc545a393a25177083d4a748807cc0'.format(
catId, page, sortMode, sortType, title, unionId
))
print(sign)
url = 'https://app.poizon.com/api/v1/h5/search/fire/search/list?' \
'sign={}&title={}&page={}&sortType={}' \
'&sortMode={}&limit=20&showHot=-1&catId={}&unionId={}'.format(sign, title, page, sortType, sortMode, catId, unionId)
res_data = requests.get(url, headers=headers).text
print(res_data)
def product_detail_data(spuId, propertyValueId, productSourceName=''):
    # Product detail page API
post_data = dict()
post_data['spuId'] = spuId
post_data['productSourceName'] = productSourceName
post_data['propertyValueId'] = propertyValueId
post_data['sign'] = return_sign(
'productSourceName{}propertyValueId{}spuId{}19bc545a393a25177083d4a748807cc0'.format(productSourceName, propertyValueId, spuId))
post_data = str(post_data).encode('utf-8')
res_data = requests.post(product_detail_url, headers=headers, data=post_data).text
print(res_data)
if __name__ == '__main__':
index_load_data(1)
recensales_load_more_data(73803, '')
    search_by_keywords_load_more_data('詹17', 0, 1, 1) # '詹17' = LeBron 17; picked up a pair in 2020 😍
    brand_list_load_more_data(page=0, sortType=1, sortMode=1, catId=0, unionId=144) # Nike series
product_detail_data(spuId=49413, propertyValueId=0)
    # These five APIs cover the core data APIs of the Du (Poizon) app
```
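All of the hard-coded sign strings above follow one pattern: the request parameters are concatenated as name + value pairs, sorted alphabetically by name, the fixed app key `19bc545a393a25177083d4a748807cc0` is appended, and the result is MD5-hashed. A generic helper, shown here as a sketch (my own generalization inferred from the format strings above; `make_sign` is not a name from the app), reproduces the same strings:
```python
import hashlib

APP_KEY = "19bc545a393a25177083d4a748807cc0"  # the fixed key embedded in every format string above

def make_sign(params: dict) -> str:
    # Concatenate key/value pairs sorted by key, append the app key, then MD5 the whole string.
    raw = "".join("{}{}".format(k, v) for k, v in sorted(params.items())) + APP_KEY
    return hashlib.md5(raw.encode("utf8")).hexdigest()

# Produces the same value as the hard-coded string used by index_load_data(1):
# "lastId1limit20tabId" + APP_KEY
print(make_sign({"tabId": "", "limit": 20, "lastId": 1}))
```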
**Discussion: https://t.me/joinchat/M3PE4RbHgrZJKhMIgQLwoQ**<br>
## Results
**Word cloud**
![Alt text](./pic/du_wordcloud.png)
**Products**
![Alt text](./pic/du_sku.png)
**Product details**
![Alt text](./pic/du_detail.png)
>**(This project is intended for learning purposes only; you take full responsibility for any consequences of using it. If the Du app team sees this project and would like it removed, please contact me by email.)**
<br>
f48a62947321d3a9bd332aac5b35baac10fb8796 | 980 | md | Markdown | docs/providers/rancher/README.md | claytonbrown/tf-cfn-provider | b338642692e91a959bba201c2a84be6bfc7c4bed | [
"MIT"
] | 25 | 2019-01-19T10:39:46.000Z | 2021-05-24T23:38:13.000Z | docs/providers/rancher/README.md | claytonbrown/tf-cfn-provider | b338642692e91a959bba201c2a84be6bfc7c4bed | [
"MIT"
] | null | null | null | docs/providers/rancher/README.md | claytonbrown/tf-cfn-provider | b338642692e91a959bba201c2a84be6bfc7c4bed | [
"MIT"
] | 3 | 2019-02-27T04:34:36.000Z | 2019-06-20T05:25:56.000Z | # Rancher Provider
## Configuration
To configure this provider, you must create an AWS Secrets Manager secret with the name **terraform/rancher** or add [template metadata](https://github.com/iann0036/tf-cfn-provider/blob/master/examples/metadata.yaml). The following arguments may be included as key/value or JSON properties in the secret or metadata object (see the example secret after this list):
* `api_url` - (Required) Rancher API URL.
* `access_key` - (Optional) Rancher API access key.
* `secret_key` - (Optional) Rancher API secret key.
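For example, a **terraform/rancher** secret holding all three arguments as a JSON object might look like the following (the URL and key values are placeholders):
```json
{
  "api_url": "https://rancher.example.com/v2-beta",
  "access_key": "<your-access-key>",
  "secret_key": "<your-secret-key>"
}
```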
## Supported Resources
* [Terraform::Rancher::Certificate](Certificate.md)
* [Terraform::Rancher::Environment](Environment.md)
* [Terraform::Rancher::Host](Host.md)
* [Terraform::Rancher::RegistrationToken](RegistrationToken.md)
* [Terraform::Rancher::RegistryCredential](RegistryCredential.md)
* [Terraform::Rancher::Registry](Registry.md)
* [Terraform::Rancher::Secrets](Secrets.md)
* [Terraform::Rancher::Stack](Stack.md)
* [Terraform::Rancher::Volumes](Volumes.md)
---
title: Multiple Series on a Chart (Report Builder and SSRS) | Microsoft Docs
ms.date: 03/07/2017
ms.prod: reporting-services
ms.prod_service: reporting-services-native
ms.technology: report-design
ms.topic: conceptual
ms.assetid: b99e4398-1fba-4824-958f-5c75d10485ea
author: maggiesMSFT
ms.author: maggies
ms.openlocfilehash: 8aedfd50c591f3a8aef4855854eed760ce093a7a
ms.sourcegitcommit: b78f7ab9281f570b87f96991ebd9a095812cc546
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 01/31/2020
ms.locfileid: "65580631"
---
# <a name="multiple-series-on-a-chart-report-builder-and-ssrs"></a>圖表上的多個數列 (報表產生器及 SSRS)
當圖表上出現多個數列時,您必須決定比較數列的最好方式。 您可以使用堆疊圖表顯示每個數列的相關比例。 如果您只要比較共用共同類別目錄 (x) 軸的兩個數列,請使用副座標軸。 顯示兩個相關資料數列 (例如,價格和數量,或收入和稅額) 時,這相當實用。 如果圖表變成無法讀取,請考慮使用多個圖表區域,在每個數列之間建立更多視覺上的分隔。
除了使用圖表功能之外,決定哪種圖表類型應該用於您的資料也相當重要。 如果資料集中的欄位是相關的,請考慮使用範圍圖表。
> [!NOTE]
> [!INCLUDE[ssRBRDDup](../../includes/ssrbrddup-md.md)]
## <a name="using-stacked-and-100-stacked-charts"></a>使用堆疊與 100% 堆疊圖表
堆疊圖表通常用於顯示一個圖表區域中的多個數列。 當您嘗試顯示的資料緊密相關時,請考慮使用堆疊圖表。 在堆疊圖表上顯示四個 (包含) 以下的數列也是相當好的作法。 如果您要比較每個數列佔整體的比例,使用 100% 堆疊區域、長條圖或直條圖。 這些圖表會計算每個數列佔類別目錄的相對百分比。 如需詳細資訊,請參閱[區域圖 (報表產生器及 SSRS)](../../reporting-services/report-design/area-charts-report-builder-and-ssrs.md)、[橫條圖 (報表產生器及 SSRS)](../../reporting-services/report-design/bar-charts-report-builder-and-ssrs.md) 和[直條圖 (報表產生器及 SSRS)](../../reporting-services/report-design/column-charts-report-builder-and-ssrs.md)。
## <a name="using-the-secondary-axis"></a>使用副座標軸
當新的數列加入到圖表時,該數列會使用主要 x 和 y 軸繪製。 當您想要比較屬於不同測量單位的值時,請考慮使用 *「副座標軸」* (Secondary Axis),讓您可以在個別的軸上繪製兩個數列。 在比較屬於不同測量單位的值時,副座標軸相當實用。 副座標軸繪製在主座標軸的另一側。 此圖表僅支援主座標軸和副座標軸。 副座標軸與主座標軸的屬性相同。 如需詳細資訊,請參閱[繪製副座標軸上的資料 (報表產生器及 SSRS)](../../reporting-services/report-design/plot-data-on-a-secondary-axis-report-builder-and-ssrs.md)。
如果您想要顯示兩個以上具有不同資料範圍的數列,請考慮將數列放在個別的圖表區域中。
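In report definition (RDL) terms, each series names the value axis it is plotted against. The fragment below is a hand-written, abbreviated sketch of the relevant elements (data points and axis definitions are omitted, and the series names are illustrative): a column series on the primary value axis and a line series on the secondary one, both in the default chart area.
```xml
<ChartData>
  <ChartSeriesCollection>
    <ChartSeries Name="Revenue">
      <Type>Column</Type>
      <ChartAreaName>Default</ChartAreaName>
      <ValueAxisName>Primary</ValueAxisName>
    </ChartSeries>
    <ChartSeries Name="TaxRate">
      <Type>Line</Type>
      <ChartAreaName>Default</ChartAreaName>
      <ValueAxisName>Secondary</ValueAxisName>
    </ChartSeries>
  </ChartSeriesCollection>
</ChartData>
```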
## <a name="using-chart-areas"></a>使用圖表區域
此圖表是最上層的容器,其中包含外框、圖表標題和圖例。 依預設,圖表包含一個預設的圖表區域。 在圖表介面上看不到圖表區域,但是您可以將圖表區域視為僅包含軸標籤、軸標題,以及一或多個數列之繪圖區的容器。 下圖顯示單一圖表內圖表區域的概念。
![顯示圖表區域的圖表](../../reporting-services/report-design/media/chartareasdiagram.gif "顯示圖表區域的圖表")
您可以使用 **[圖表區域屬性]** 對話方塊,指定包含在圖表區域中所有數列的 2D 和 3D 方向、對齊相同圖表內的多個圖表區域,以及格式化繪圖區的色彩。 在僅包含一個預設圖表區域的圖表上定義新的圖表區域時,圖表區域的可用空間會以水平方式分成兩個,而新的圖表區域放置在第一個圖表區域之下。
每個數列都只能連接到一個圖表區域。 依預設,所有數列都會加入到預設的圖表區域中。 使用區域圖、直條圖、折線圖和散佈圖時,這些數列的任何組合都可以顯示在相同的圖表區域中。 例如,您可以在相同的圖表區域中顯示資料行數列和線條數列。 針對多個數列使用相同圖表區域的優點是,使用者可以輕鬆進行比較。
長條圖、雷達圖和形狀圖無法與相同圖表區域中的其他任何圖表類型結合。 如果您要比較屬於長條圖、雷達圖或形狀圖類型的多個數列,您需要執行下列其中之一:
- 將圖表區域中的所有數列變更為相同的圖表類型。
- 建立新的圖表區域,然後將一或多個數列從預設的圖表區域移到新建立的圖表區域中。
如果您要嘗試比較具有不同值刻度的資料,單一圖表上的多個圖表區域功能也相當實用。 例如,如果第一個數列在 10 到 20 的範圍內包含資料,而第二個數列在 400 到 800 的範圍內包含資料,則第一個數列中的值可能會遭到遮蓋。 請考慮將每個數列分隔到不同的圖表區域中。 如需詳細資訊,請參閱[為數列指定圖表區域 (報表產生器及 SSRS)](../../reporting-services/report-design/specify-a-chart-area-for-a-series-report-builder-and-ssrs.md)。
## <a name="using-range-charts"></a>使用範圍圖表
範圍圖表的每個資料點都有兩個值。 如果您的圖表包含共用相同類別目錄 (x) 軸的兩個數列,您可以使用範圍圖表顯示兩個數列間的差距。 範圍圖表最適合用於顯示高-低或上-下資訊。 例如,如果第一個數列包含一月每天的最高銷售額,而第二個數列包含一月每天的最低銷售額,則您可以使用範圍圖表顯示每天銷售額中,最高與最低間的差距。 如需詳細資訊,請參閱[範圍圖表 (報表產生器及 SSRS)](../../reporting-services/report-design/range-charts-report-builder-and-ssrs.md)。
## <a name="see-also"></a>另請參閱
[圖表 (報表產生器及 SSRS)](../../reporting-services/report-design/charts-report-builder-and-ssrs.md)
[將包含多個資料範圍的數列顯示在圖表上 (報表產生器及 SSRS)](../../reporting-services/report-design/displaying-a-series-with-multiple-data-ranges-on-a-chart.md)
[圖表類型 (報表產生器及 SSRS)](../../reporting-services/report-design/chart-types-report-builder-and-ssrs.md)
# Java Sample Employee Data Modeling with Audit Fields
## Introduction
This project is used to introduce Java Spring REST API CRUD applications. As such, it is a small application showing just the code needed to explain the topic.
## Database layout
The table layouts are as follows:
- Employee is the driving table
- Email has a Many-To-One relationship with Employee. Each employee has many emails. Each email has only one employee.
- Jobtitles has a Many-To-Many relationship with Employee
- EmployeeTitles is the join table between Jobtitles and Employee; besides the two foreign keys it carries its own field, manager (see the sketch after the diagram)
![Image of Database Layout](sampleemps-audit-db.png)
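Because each job assignment carries its own data (the `manager` field you will see in the outputs below), the join table is modeled as an entity in its own right rather than hidden behind a plain `@ManyToMany`. A minimal sketch of that pattern follows; the class, field, and column names are inferred from the diagram and the outputs, so treat them as illustrative, and accessors are omitted:
```java
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

// Composite key type: one EmployeeTitles row per (employee, job title) pair.
class EmployeeTitlesId implements Serializable {
    private long employee;
    private long jobname;
}

// Join entity between Employee and Jobtitles. It is a real entity
// (instead of a bare @ManyToMany) because it adds its own column: manager.
@Entity
@IdClass(EmployeeTitlesId.class)
public class EmployeeTitles {
    @Id
    @ManyToOne
    @JoinColumn(name = "employeeid")
    private Employee employee;  // the Employee entity from the diagram

    @Id
    @ManyToOne
    @JoinColumn(name = "jobtitleid")
    private Jobtitle jobname;   // the Jobtitles entity from the diagram

    private String manager;     // extra data carried on the relationship itself
}
```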
Using the provided seed data, the given endpoints produce the stated outputs. Expand each endpoint to see its correct output. Note that I only generate 3 random Java Faker employees.
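The `createdBy`, `createdDate`, `lastModifiedBy`, and `lastModifiedDate` fields on every record below are the audit fields from the title. The standard Spring Data JPA way to get them is an audited `@MappedSuperclass` that every entity extends; the sketch below shows that pattern and may differ in detail from this project's own class:
```java
import java.util.Date;
import javax.persistence.EntityListeners;
import javax.persistence.MappedSuperclass;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import org.springframework.data.annotation.CreatedBy;
import org.springframework.data.annotation.CreatedDate;
import org.springframework.data.annotation.LastModifiedBy;
import org.springframework.data.annotation.LastModifiedDate;
import org.springframework.data.jpa.domain.support.AuditingEntityListener;

// Every entity extends this class, so every table gets the same four audit columns.
@MappedSuperclass
@EntityListeners(AuditingEntityListener.class)
public abstract class Auditable {
    @CreatedBy
    protected String createdBy;        // supplied by an AuditorAware bean

    @CreatedDate
    @Temporal(TemporalType.TIMESTAMP)
    protected Date createdDate;

    @LastModifiedBy
    protected String lastModifiedBy;

    @LastModifiedDate
    @Temporal(TemporalType.TIMESTAMP)
    protected Date lastModifiedDate;
}
```
Auditing also requires `@EnableJpaAuditing` on a configuration class plus an `AuditorAware<String>` bean that names the current user, which is where the `SYSTEM` value in the seed data comes from.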
<details>
<summary>http://localhost:2019/employees/employees</summary>
```JSON
[
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 3,
"name": "CINNAMON",
"salary": 80000.0,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 2,
"title": "Wizard"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 4,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 5,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 6,
"name": "BARNBARN",
"salary": 80000.0,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 7,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 8,
"name": "JOHN",
"salary": 75000.0,
"jobnames": [],
"emails": []
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 9,
"name": "Penney Wiegand",
"salary": 104522.50369722086,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": []
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 10,
"name": "Karisa Hagenes",
"salary": 143898.7951833187,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 11,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 12,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 13,
"name": "Phillis Bechtelar",
"salary": 119107.99038025294,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 14,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 15,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 16,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 17,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 18,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 19,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 20,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 21,
"email": "[email protected]"
}
]
}
]
```
</details>
<details>
<summary>http://localhost:2019/employees/employeename/mon</summary>
```JSON
[
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 3,
"name": "CINNAMON",
"salary": 80000.0,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 2,
"title": "Wizard"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 4,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 5,
"email": "[email protected]"
}
]
}
]
```
</details>
<details>
<summary>http://localhost:2019/employees/employeeemail/com</summary>
```JSON
[
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 3,
"name": "CINNAMON",
"salary": 80000.0,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 2,
"title": "Wizard"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 4,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 5,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 6,
"name": "BARNBARN",
"salary": 80000.0,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 7,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 10,
"name": "Karisa Hagenes",
"salary": 143898.7951833187,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 11,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 12,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 10,
"name": "Karisa Hagenes",
"salary": 143898.7951833187,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 11,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 12,
"email": "[email protected]"
}
]
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"employeeid": 13,
"name": "Phillis Bechtelar",
"salary": 119107.99038025294,
"jobnames": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobname": {
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"jobtitleid": 1,
"title": "Big Boss"
},
"manager": "Stumps"
}
],
"emails": [
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 14,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 15,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 16,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 17,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 18,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 19,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 20,
"email": "[email protected]"
},
{
"createdBy": "SYSTEM",
"createdDate": "2020-07-20 18:51:24",
"lastModifiedBy": "SYSTEM",
"lastModifiedDate": "2020-07-20 18:51:24",
"emailid": 21,
"email": "[email protected]"
}
]
  }
]
```
</details>
<details>
<summary>POST http://localhost:2019/employees/employee</summary>
DATA
```JSON
{
"name": "Mojo",
"salary": 100000.00,
"jobnames": [
{
"jobname":{
"jobtitleid": 2,
"title": "Wizard"
},
"manager":"stumps"
}
],
"emails": [
{
"email": "[email protected]"
},
{
"email": "[email protected]"
}
]
}
```
OUTPUT
```TEXT
Location Header: http://localhost:2019/employees/employee/22
STATUS 201 Created
```
</details>
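
For reference, the same POST call can be scripted. Below is a minimal sketch using Python's `requests` library; it mirrors the example above and assumes the service is running locally on port 2019:

```PYTHON
import requests

# Mirrors the POST example above; payload fields match the documented schema.
payload = {
    "name": "Mojo",
    "salary": 100000.00,
    "jobnames": [
        {"jobname": {"jobtitleid": 2, "title": "Wizard"}, "manager": "stumps"}
    ],
    "emails": [{"email": "[email protected]"}, {"email": "[email protected]"}],
}

response = requests.post("http://localhost:2019/employees/employee", json=payload)
print(response.status_code)              # expect 201
print(response.headers.get("Location"))  # URL of the newly created employee
```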
<details>
<summary>PUT http://localhost:2019/employees/employee/13</summary>
DATA
```JSON
{
"name": "Corgie",
"salary": 80000.00,
"jobnames": [
{
"jobname":{
"jobtitleid": 2,
"title": "Wizard"
},
"manager":"stumps"
},
{
"jobname":{
"jobtitleid": 1,
"title": "Big Boss"
},
"manager":"stumps"
}
],
"emails": [
{
"email": "[email protected]"
},
{
"email": "[email protected]"
}
]
}
```
OUTPUT
```TEXT
STATUS 200 OK
```
</details>
<details>
<summary>PATCH http://localhost:2019/employees/employee/16</summary>
DATA
```JSON
{
"salary": 100000.00
}
```
OUTPUT
```TEXT
STATUS 200 OK
```
</details>
<details>
<summary>DELETE http://localhost:2019/employees/employee/16</summary>
OUTPUT
```TEXT
STATUS 200 OK
```
</details>
[Sample Swagger Documentation](https://drive.google.com/file/d/1EijscrIhpv6lbnSbLXcaH7NeLFCEpI2l/view) | 34.085816 | 184 | 0.415056 | yue_Hant | 0.127022 |

---
description: Create a pool and distribute tokens using @superfluid-finance/js-sdk
---
# 💰 Instant Distribution
## Introduction
Let's take a look at another Superfluid Agreement. The **Instant Distribution Agreement (IDA)** can be used to make one-time-payments to multiple recipients. The IDA could be used for revenue sharing or airdrops. It consists of a **Publishing Index** with `indexId`, an `indexValue`, and one or more **subscribers**.
Here is a quick glimpse at the IDA contract we'll use in this tutorial. Don't worry, you don't need to memorize this.
```solidity
contract IInstantDistributionAgreementV1 is ISuperAgreement {
function createIndex(
ISuperToken token,
uint32 indexId,
bytes calldata ctx)
external
virtual
returns(bytes memory newCtx);
function updateIndex(
ISuperToken token,
uint32 indexId,
uint128 indexValue,
bytes calldata ctx)
external
virtual
returns(bytes memory newCtx);
```
## Prerequisites
Before starting this tutorial you should:
* Complete the [@superfluid-finance/js-sdk](frontend-+-nodejs.md) tutorial
* Have some goerli ETH and tokens in your wallet from the dashboard [https://app.superfluid.finance](https://app.superfluid.finance)
## Create a Pool
The first step to using an IDA is to create a **Publishing Index**. Once created, the Publishing Index can be used to send tokens at any point in the future. To make things easier, we will just call this `poolId`
Lets create a new **pool:**
```javascript
await alice.createPool({ poolId: 1 });
```
Now that we have our Plublishing Index / pool, we can assign it some **Subscribers**. We do this by _giving shares_ to other people like this:
```javascript
await alice.giveShares({ poolId: 1, recipient: bob, shares: 90 });
await alice.giveShares({ poolId: 1, recipient: carol, shares: 10 });
```
And finally we can _distribute funds_ to everyone in the pool, based on the amount of shares they have.
```javascript
await alice.distributeToPool({ poolId: 1, amount: 1000 });
```
In this example, bob will receive 90% of the tokens alice sent, while carol only receives 10%. Bob will have 900 and carol will have 100.
Thats it! One thing to pay attention - for a recipient's balance to reflect the distribution event, they should first call `approveSubscription` one time. If they fail to do this, don't worry, they can still receive their tokens after calling the `claim` function at any time.
⚔ Look at you now, with two agreements at your disposal. You can duel-wield! Its time to learn more about Super Tokens.
| 38.764706 | 316 | 0.730653 | eng_Latn | 0.989249 |

---
UID: NF:wdm.IoValidateDeviceIoControlAccess
title: IoValidateDeviceIoControlAccess function (wdm.h)
description: For more information, see the WdmlibIoValidateDeviceIoControlAccess function.
old-location: kernel\iovalidatedeviceiocontrolaccess.htm
tech.root: kernel
ms.assetid: 45e8279f-b7a5-4b45-92b7-5f740f6c1117
ms.date: 04/30/2018
keywords: ["IoValidateDeviceIoControlAccess function"]
ms.keywords: IoValidateDeviceIoControlAccess, IoValidateDeviceIoControlAccess routine [Kernel-Mode Driver Architecture], k104_724cb845-fabf-4b5a-8712-901829f1f79d.xml, kernel.iovalidatedeviceiocontrolaccess, wdm/IoValidateDeviceIoControlAccess
f1_keywords:
- "wdm/IoValidateDeviceIoControlAccess"
- "IoValidateDeviceIoControlAccess"
req.header: wdm.h
req.include-header: Wdm.h, Ntddk.h, Ntifs.h
req.target-type: Universal
req.target-min-winverclnt: Available in Windows Server 2003 and later versions of Windows. Drivers that must also work for Windows 2000 and Windows XP can instead link to Wdmsec.lib to use this routine. (The Wdmsec.lib library first shipped with the Windows XP Service Pack 1 [SP1] and Windows Server 2003 editions of the Driver Development Kit [DDK] and now ships with the Windows Driver Kit [WDK].)
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: NtosKrnl.lib
req.dll: NtosKrnl.exe
req.irql: Any level
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- NtosKrnl.exe
api_name:
- IoValidateDeviceIoControlAccess
targetos: Windows
req.typenames:
---
# IoValidateDeviceIoControlAccess function
## -description
For more information, see the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdmsec/nf-wdmsec-wdmlibiovalidatedeviceiocontrolaccess">WdmlibIoValidateDeviceIoControlAccess</a> function.
## -parameters
### -param Irp [in]
For more information, see the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdmsec/nf-wdmsec-wdmlibiovalidatedeviceiocontrolaccess">WdmlibIoValidateDeviceIoControlAccess</a> function.
### -param RequiredAccess [in]
For more information, see the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdmsec/nf-wdmsec-wdmlibiovalidatedeviceiocontrolaccess">WdmlibIoValidateDeviceIoControlAccess</a> function.
## -returns
For more information, see the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdmsec/nf-wdmsec-wdmlibiovalidatedeviceiocontrolaccess">WdmlibIoValidateDeviceIoControlAccess</a> function.
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/ns-wdm-_irp">IRP</a>
| 32.172414 | 401 | 0.779207 | yue_Hant | 0.457462 |
# TorchBeast
This library is a fork of [torchbeast](https://github.com/facebookresearch/torchbeast)
and is distributed under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Installation
Create virtualenv with [poetry](https://python-poetry.org) and execute the following command
in the virtualenv directory (that contains `.venv`):
```bash
$ curl -sSL https://github.com/izumiya-keisuke/torchbeast/raw/main/install.sh | sh
```
## Changes from the original
- Added the last action to the return value.
- [actorpool.cc](/libtorchbeast/actorpool.cc)
- Support PolyBeast installation.
- [pyproject.toml](/pyproject.toml)
- [install_grpc.sh](/scripts/install_grpc.sh)
- [setup.py](/setup.py)
- [install.sh](/install.sh)
- Fix readme.
---
Original README is shown below.
---
A PyTorch implementation of [IMPALA: Scalable Distributed
Deep-RL with Importance Weighted Actor-Learner Architectures
by Espeholt, Soyer, Munos et al.](https://arxiv.org/abs/1802.01561)
TorchBeast comes in two variants:
[MonoBeast](#getting-started-monobeast) and
[PolyBeast](#faster-version-polybeast). While
PolyBeast is more powerful (e.g. allowing training across machines),
it's somewhat harder to install. MonoBeast requires only Python and
PyTorch (we suggest using PyTorch version 1.2 or newer).
For further details, see our [paper](https://arxiv.org/abs/1910.03552).
## BibTeX
```
@article{torchbeast2019,
title={{TorchBeast: A PyTorch Platform for Distributed RL}},
author={Heinrich K\"{u}ttler and Nantas Nardelli and Thibaut Lavril and Marco Selvatici and Viswanath Sivakumar and Tim Rockt\"{a}schel and Edward Grefenstette},
year={2019},
journal={arXiv preprint arXiv:1910.03552},
url={https://github.com/facebookresearch/torchbeast},
}
```
## Getting started: MonoBeast
MonoBeast is a pure Python + PyTorch implementation of IMPALA.
To set it up, create a new conda environment and install MonoBeast's
requirements:
```bash
$ conda create -n torchbeast python=3.7
$ conda activate torchbeast
$ conda install pytorch -c pytorch
$ pip install -r requirements.txt
```
Then run MonoBeast, e.g. on the [Pong Atari
environment](https://gym.openai.com/envs/Pong-v0/):
```shell
$ python -m torchbeast.monobeast --env PongNoFrameskip-v4
```
By default, MonoBeast uses only a few actors (each with their instance
of the environment). Let's change the default settings (try this on a
beefy machine!):
```shell
$ python -m torchbeast.monobeast \
--env PongNoFrameskip-v4 \
--num_actors 45 \
--total_steps 30000000 \
--learning_rate 0.0004 \
--epsilon 0.01 \
--entropy_cost 0.01 \
--batch_size 4 \
--unroll_length 80 \
--num_buffers 60 \
--num_threads 4 \
--xpid example
```
Results are logged to `~/logs/torchbeast/latest` and a checkpoint file is
written to `~/logs/torchbeast/latest/model.tar`.
Once training finished, we can test performance on a few episodes:
```shell
$ python -m torchbeast.monobeast \
--env PongNoFrameskip-v4 \
--mode test \
--xpid example
```
MonoBeast is a simple, single-machine version of IMPALA.
Each actor runs in a separate process with its dedicated instance of
the environment and runs the PyTorch model on the CPU to create
actions. The resulting rollout trajectories
(environment-agent interactions) are sent to the learner. In the main
process, the learner consumes these rollouts and uses them to update
the model's weights.
## Faster version: PolyBeast
PolyBeast provides a faster and more scalable implementation of
IMPALA.
The easiest way to build and install all of PolyBeast's dependencies
and run it is to use Docker:
```shell
$ docker build -t torchbeast .
$ docker run --name torchbeast torchbeast
```
To run PolyBeast directly on Linux or MacOS, follow this guide.
### Installing PolyBeast
#### Linux
Create a new Conda environment, and install PolyBeast's requirements:
```shell
$ conda create -n torchbeast python=3.7
$ conda activate torchbeast
$ pip install -r requirements.txt
```
Install PyTorch either [from
source](https://github.com/pytorch/pytorch#from-source) or as per its
[website](https://pytorch.org/get-started/locally/) (select Conda and
Python 3.7).
PolyBeast also requires gRPC, which can be installed by running:
```shell
$ git submodule update --init --recursive
$ conda install -c anaconda protobuf --yes
$ ./scripts/install_grpc.sh
```
Finally, let's compile the C++ parts of PolyBeast:
```
$ pip install nest/
$ export LD_LIBRARY_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}/lib:${LD_LIBRARY_PATH}
$ python setup.py install
```
#### MacOS
Create a new Conda environment, and install PolyBeast's requirements:
```shell
$ conda create -n torchbeast python=3.7
$ conda activate torchbeast
$ pip install -r requirements.txt
```
On MacOS, we can use homebrew to install gRPC:
```shell
$ brew install grpc
```
PyTorch can be installed as per its
[website](https://pytorch.org/get-started/locally/) (select Conda and
Python 3.7).
Compile and install the C++ parts of PolyBeast:
```
$ pip install nest/
$ TORCHBEAST_LIBS_PREFIX=/usr/local CXX=c++ python setup.py install
$ python setup.py build develop # Sets up Python search paths.
```
### Running PolyBeast
To start both the environment servers and the learner process, run
```shell
$ python -m torchbeast.polybeast
```
The environment servers and the learner process can also be started separately:
```shell
python -m torchbeast.polybeast_env --num_servers 10
```
Start another terminal and run:
```shell
$ python3 -m torchbeast.polybeast_learner
```
## (Very rough) overview of the system
```
|-----------------| |-----------------| |-----------------|
| ACTOR 1 | | ACTOR 2 | | ACTOR n |
|-------| | |-------| | |-------| |
| | .......| | | .......| . . . | | .......|
| Env |<-.Model.| | Env |<-.Model.| | Env |<-.Model.|
| |->.......| | |->.......| | |->.......|
|-----------------| |-----------------| |-----------------|
^ I ^ I ^ I
| I | I | I Actors
| I rollout | I rollout weights| I send
| I | I /--------/ I rollouts
| I weights| I | I (frames,
| I | I | I actions
| I | v | I etc)
| L=======>|--------------------------------------|<===========J
| |......... LEARNER |
\--------------|..Model.. Consumes rollouts, updates |
Learner |......... model weights |
sends |--------------------------------------|
weights
```
The system has two main components, actors and a learner.
Actors generate rollouts (tensors from a number of steps of
environment-agent interactions, including environment frames, agent
actions and policy logits, and other data).
The learner consumes that experience, computes a loss and updates the
weights. The new weights are then propagated to the actors.
## Learning curves on Atari
We ran TorchBeast on Atari, using the same hyperparameters and neural
network as in the [IMPALA
paper](https://arxiv.org/abs/1802.01561). For comparison, we also ran
the [open source TensorFlow implementation of
IMPALA](https://github.com/deepmind/scalable_agent), using the [same
environment
preprocessing](https://github.com/heiner/scalable_agent/releases/tag/gym). The
results are equivalent; see our paper for details.
![deep_network](./plot.png)
## Repository contents
`libtorchbeast`: C++ library that allows efficient learner-actor
communication via queueing and batching mechanisms. Some functions are
exported to Python using pybind11. For PolyBeast only.
`nest`: C++ library for manipulating complex nested
structures. Some functions are exported to Python using
pybind11.
`tests`: Collection of python tests.
`third_party`: Collection of third-party dependencies as Git
submodules. Includes [gRPC](https://grpc.io/).
`torchbeast`: Contains `monobeast.py`, and `polybeast.py`,
`polybeast_learner.py` and `polybeast_env.py`.
## Hyperparameters
Both MonoBeast and PolyBeast have flags and hyperparameters. To
describe a few of them:
* `num_actors`: The number of actors (and environment instances). The
optimal number of actors depends on the capabilities of the machine
(e.g. you would not have 100 actors on your laptop). In default
PolyBeast this should match the number of servers started.
* `batch_size`: Determines the size of the learner inputs.
* `unroll_length`: Length of a rollout (i.e., number of steps that an
  actor has to perform before sending its experience to the
  learner). Note that every batch will have dimensions
  `[unroll_length, batch_size, ...]`; see the sketch below.
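
To make that layout concrete, here is a toy sketch (plain PyTorch; the
field names are illustrative, not TorchBeast's actual buffer schema):

```python
import torch

unroll_length, batch_size = 80, 4  # matches the example flags above

# One learner batch: time-major rollout tensors.
batch = {
    "frame": torch.zeros(unroll_length, batch_size, 4, 84, 84),  # stacked frames
    "action": torch.zeros(unroll_length, batch_size, dtype=torch.int64),
    "reward": torch.zeros(unroll_length, batch_size),
}

for name, tensor in batch.items():
    print(name, tuple(tensor.shape))  # every entry starts with (80, 4, ...)
```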
## Contributing
We would love to have you contribute to TorchBeast or use it for your
research. See the [CONTRIBUTING.md](CONTRIBUTING.md) file for how to help
out.
## License
TorchBeast is released under the Apache 2.0 license.
| 30.414791 | 163 | 0.654932 | eng_Latn | 0.948752 |

---
title: Azure Kinect DK hardware specifications
description: Learn about the components, specifications, and capabilities of the Azure Kinect DK.
author: tesych
ms.author: tesych
ms.reviewer: jarrettr
ms.prod: kinect-dk
ms.date: 02/14/2020
ms.topic: article
keywords: azure, kinect, specs, hardware, DK, capabilities, depth, color, RGB, IMU, microphone, array, depth camera
ms.custom:
- CI 114092
- CSSTroubleshooting
audience: ITPro
manager: dcscontentpm
ms.localizationpriority: high
ms.openlocfilehash: e0d42a3ce1dd9deb5e73500371c367134ca852e1
ms.sourcegitcommit: 5a71ec1a28da2d6ede03b3128126e0531ce4387d
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 02/26/2020
ms.locfileid: "77619968"
---
# <a name="azure-kinect-dk-hardware-specifications"></a>Azure Kinect DK hardware specifications
This article provides details about how the Azure Kinect hardware integrates Microsoft's latest sensor technology into a single, USB-connected accessory.
![Azure Kinect DK](./media/resources/hardware-specs-media/device-wire.png)
## <a name="terms"></a>詞彙
本文會使用下列縮寫詞彙。
- NFOV (窄視野深度模式)
- NFOV (寬視野深度模式)
- FOV (視野)
- FPS (每秒畫面格數)
- IMU (慣性測量單位)
- FoI (感興趣的領域)
## <a name="product-dimensions-and-weight"></a>產品尺寸和重量
Azure Kinect 裝置是由下列尺寸和重量維度所組成。
- **維度**:103 x 39 x 126 毫米
- **重量**:440 克
![Azure Kinect DK 維度](./media/resources/hardware-specs-media/dimensions.png)
## <a name="operating-environment"></a>作業環境
Azure Kinect DK 適用於在下列環境條件下操作的開發人員和商務企業:
- **溫度**:10-25⁰C
- **濕度**:8-90% (非凝固) 相對溼度
> [!NOTE]
> 在環境條件之外使用可能會導致裝置失敗和/或運作不正常。 這些環境條件適用於裝置在所有作業條件下的周遭環境。 搭配外部機殼使用時,建議使用主動溫度控制及 (或) 其他冷卻解決方案,以確保在這些範圍內維護裝置。 裝置設計的特色是正面區段與後端套筒之間有一個冷卻通道。 當您實作裝置時,請確定此冷卻通道不受到阻礙。
請參閱其他產品[安全性資訊](https://support.microsoft.com/help/4023454/safety-information)。
## <a name="depth-camera-supported-operating-modes"></a>深度相機支援的作業模式
Azure Kinect DK 整合了 Microsoft 設計的 1 百萬像素飛行時間 (ToF) 深度相機,其使用[在 ISSCC 2018 呈現的影像感應器](https://docs.microsoft.com/windows/mixed-reality/ISSCC-2018)。 深度相機支援以下所示的模式:
| [模式] | 解決方案 | FoI | FPS | 作業範圍* | 曝光時間 |
|-----------------|------------|-----------|--------------------|------------------|---------------|
| NFOV 未分類收納 | 640x576 | 75°x65° | 0、5、15、30 | 0.5 - 3.86 米 | 12.8 毫秒 |
| NFOV 2x2 分類收納 (SW) | 320x288 | 75°x65° | 0、5、15、30 | 0.5 - 5.46 米 | 12.8 毫秒 |
| WFOV 2x2 分類收納 | 512x512 | 120°x120° | 0、5、15、30 | 0.25 - 2.88 米 | 12.8 毫秒 |
| WFOV 未分類收納 | 1024x1024 | 120°x120° | 0、5、15 | 0.25 - 2.21 米 | 20.3 毫秒 |
| 被動式 IR | 1024x1024 | N/A | 0、5、15、30 | N/A | 1.6 毫秒 |
\*15% 到 95% 反射率:850nm 波長,2.2 μW/cm<sup>2</sup>/nm,隨機誤差的標準差 ≤ 17 毫米,典型系統誤差 < 11 毫米 + 0.1% 的差距,但沒有多路徑干擾。 可提供的深度可能會超出上方所示的作業範圍。 這取決於物體的反射率。
## <a name="color-camera-supported-operating-modes"></a>彩色相機支援的作業模式
Azure Kinect DK 包含 OV12A10 12MP CMOS 滾動式快門感應器。 原生作業模式如下所列:
| RGB 相機解析度 (HxV) | 外觀比例 | 格式選項 | 畫面播放速率 (FPS) | Nominal FOV (HxV)(經過後處理) |
|------------------------------------------|------------------------|---------------------------|-----------------------------|---------------------------------------------|
| 3840x2160 | 16:9 | MJPEG | 0、5、15、30 | 90°x59° |
| 2560x1440 | 16:9 | MJPEG | 0、5、15、30 | 90°x59° |
| 1920x1080 | 16:9 | MJPEG | 0、5、15、30 | 90°x59° |
| 1280x720 | 16:9 | MJPEG/YUY2/NV12 | 0、5、15、30 | 90°x59° |
| 4096x3072 | 4:3 | MJPEG | 0、5、15 | 90°x74.3° |
| 2048x1536 | 4:3 | MJPEG | 0、5、15、30 | 90°x74.3° |
RGB 相機與 USB Video class (UVC) 相容,而且可在沒有感應器 SDK 的情況下使用。 RGB 相機色彩空間:BT.601 全幅 [0..255]。
> [!NOTE]
> 感應器 SDK 可以提供 BGRA 像素格式的彩色影像。 這不是裝置支援的原生模式,使用時會造成額外的 CPU 負載。 主機 CPU 用於從接收自裝置的 MJPEG 影像進行轉換。
## <a name="rgb-camera-exposure-time-values"></a>RGB 相機曝光時間值
以下是可接受的 RGB 相機手動曝光值對應:
| exp | 2^exp | 50Hz | 60Hz |
|----|-------|--------|--------|
| -11| 488| 500| 500 |
| -10| 977| 1250| 1250 |
| -9| 1953| 2500| 2500 |
| -8| 3906| 10000| 8330 |
| -7| 7813| 20000| 16670 |
| -6| 15625| 30000| 33330 |
| -5| 31250| 40000| 41670 |
| -4| 62500| 50000| 50000 |
| -3| 125000| 60000| 66670 |
| -2| 250000| 80000| 83330 |
| -1| 500000| 100000| 100000 |
| 0| 1000000| 120000| 116670 |
| 1| 2000000| 130000| 133330 |
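The `2^exp` column is simply 2 raised to the exposure exponent, expressed in microseconds; this interpretation is inferred from the table values themselves, not stated by the SDK. A quick sketch that reproduces the column:

```python
# Reproduces the "2^exp" column above: 2**exp seconds expressed in
# microseconds, rounded half-up. The microsecond interpretation is an
# inference from the table, not an SDK guarantee.
for exp in range(-11, 2):
    microseconds = int((2 ** exp) * 1_000_000 + 0.5)
    print(exp, microseconds)
```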
## <a name="depth-sensor-raw-timing"></a>深度感應器原始計時
深度模式 | IR <br>脈衝 | 脈衝 <br>寬度 | 閒置 <br>期間| 閒置時間 | 曝光 <br> Time
-|-|-|-|-|-
NFOV 未分類收納 <br> NFOV 2xx 分類收納 <br> WFOV 2x2 分類收納 | 9 | 125 us | 8 | 1450 us | 12.8 毫秒
WFOV 未分類收納 | 9 | 125 us | 8 | 2390 us | 20.3 毫秒
## <a name="camera-field-of-view"></a>相機視野
下圖顯示深度和 RGB 相機視野,或感應器所「看到」的角度。 下圖顯示處於4:3 模式的 RGB 相機。
![相機 FOV](./media/resources/hardware-specs-media/camera-fov.png)
此圖示範從正面距離 2000 毫米所看到的相機視野。
![相機 FOV 正面](./media/resources/hardware-specs-media/fov-front.png)
> [!NOTE]
> 當深度處於 NFOV 模式時,相較於 16:9 解析度,RGB 相機在 4:3 中有更佳的像素重疊。
## <a name="motion-sensor-imu"></a>動作感應器 (IMU)
內嵌的慣性測量單位 (IMU) 為 LSM6DSMUS,同時包含加速計和陀螺儀。 加速計和陀螺儀會同時在 1.6 kHz 取樣。 系統會向 208 Hz 的主機回報這些範例。
## <a name="microphone-array"></a>麥克風陣列
Azure Kinect DK 內嵌高品質的七隻麥克風環形陣列,可視為標準 USB 音訊類別 2.0 裝置。 全部 7 個通道都可以存取。 效能規格如下:
- 敏感度:-22 dBFS (94 dB SPL,1 kHz)
- 訊噪比 > 65 dB
- 聲學過載點:116 dB
![麥克風音頭](./media/resources/hardware-specs-media/mic-bubble.png)
## <a name="usb"></a>USB
Azure Kinect DK 是 USB3 複合裝置,會向作業系統公開下列硬體端點:
廠商識別碼為 0x045E (Microsoft),產品識別碼表如下:
| USB 介面 | PNP IP | 注意 |
|-------------------------|--------------|----------------------|
| USB3.1 Gen1 Hub | 0x097A | 主要中樞 |
| USB2.0 Hub | 0x097B | HS USB |
| 深度相機 | 0x097C | USB3.0 |
| 彩色相機 | 0x097D | USB3.0 |
| 麥克風 | 0x097E | HS USB |
## <a name="indicators"></a>指標
裝置正面有相機串流指示器,可使用感應器 SDK 以程式設計方式加以停用。
裝置後側的狀態 LED 可指出裝置狀態:
| 當燈號為 | 則表示 |
|-----------------------|------------------------------------------------------------|
| 純白色 | 裝置已開啟且運作正常。 |
| 閃爍白色 | 裝置已開啟,但沒有 USB 3.0 資料連線。 |
| 閃爍琥珀色 | 裝置沒有足夠的電源可運作。 |
| 琥珀色閃爍白色 | 韌體正在進行更新或復原 |
## <a name="power-device"></a>電源裝置
您可透過兩種方式為裝置供電:
1. 使用內建電源供應器。 資料是經由個別的 USB Type-C 對 Type-A 纜線進行連線。
2. 同時對電源和資料使用 Type-C 對 Type-C 纜線。
Azure Kinect DK 未隨附 Type-C 對 Type-C 纜線。
> [!NOTE]
> - 內建電源供應器纜線是 USB Type-A 對單一圓柱式連接器。 使用所提供的插牆式電源供應器搭配此纜線。 裝置能夠汲取的電力超過兩個標準 USB Type-A 連接埠所提供的電力。
> - USB 纜線很重要,我們建議使用高品質的纜線並先驗證其功能,然後再從遠端部署該單元。
> [!TIP]
> 若要選取良好 Type-C 對 Type-C 纜線:
> - [USB 認證纜線](https://www.usb.org/products)必須同時支援電源和資料。
> - 被動式纜線的長度應該小於 1.5 米。 如果較長,請使用主動式纜線。
> - 纜線必須支援不大於 1.5A。 否則,您必須連接外部電源供應器。
確認纜線:
- 透過纜線將裝置連接到主機電腦。
- 驗證 Windows 裝置管理員中的所有裝置都正確列舉。 深度和 RGB 相機應會出現,如下列範例所示。
![裝置管理員中的 Azure Kinect DK](./media/resources/hardware-specs-media/device-manager.png)
- 使用下列設定,在 Azure Kinect 檢視器中驗證是否可在所有感應器上確實串接纜線:
- 深度相機:NFOV 未分類收納
- RGB 相機:2160p
- 已啟用麥克風和 IMU
## <a name="what-does-the-light-mean"></a>燈號的意義
電源指示器是 Azure Kinect DK 背面的 LED。 LED 的顏色會隨著裝置的狀態改變。
![此影像顯示 Azure Kinect DK 的背面。 有三個編號的圖說文字:一個用於 LED 指示器,其下方的兩個則用於纜線。](./media/quickstarts/azure-kinect-dk-power-indicator.png)
此圖標示了下列元件:
1. 電源指示器
1. 電源線 (連接到電源)
1. USB-C 資料傳輸線 (連接到電腦)
請確定已連接纜線,如下所示。 然後檢查下表,以了解電源燈所指示的各種狀態。
|當燈號為: |則表示: |您應該: |
| ---| --- | --- |
|純白色 |裝置已開啟且運作正常。 |使用裝置。 |
|未亮起 |裝置未連接到電腦。 |請確定圓形電源連接器纜線已連接到裝置和 USB 電源配接器。<br /><br />請確定 USB-C 纜線已連接到裝置和您的電腦。 |
|閃爍白色 |裝置已開啟電源,但沒有 USB 3.0 資料連線。 |請確定圓形電源連接器纜線已連接到裝置和 USB 電源配接器。<br /><br />請確定 USB-C 纜線已連接到裝置以及您電腦上的 USB 3.0 連接埠。<br /><br />將裝置連接到電腦的其他 USB 3.0 連接埠。<br /><br />在您的電腦上,開啟裝置管理員 ([開始] > [控制台] > [裝置管理員] ),並確認您的電腦具有支援的 USB 3.0 主機控制器。 |
|閃爍琥珀色 |裝置沒有足夠的電源可運作。 |請確定圓形電源連接器纜線已連接到裝置和 USB 電源配接器。<br /><br />請確定 USB-C 纜線已連接到裝置和您的電腦。 |
|琥珀色,然後閃爍白色 |裝置已開啟電源且正在接收韌體更新,或裝置正在恢復原廠設定。 |等待電源指示燈變成純白色。 如需詳細資訊,請參閱[重設 Azure Kinect DK](reset-azure-kinect-dk.md)。 |
## <a name="power-consumption"></a>耗電量
Azure Kinect DK 最多耗用 5.9 W;特定耗電量與使用案例相關。
## <a name="calibration"></a>校正
Azure Kinect DK 是在原廠校正。 您可透過感應器 SDK 以程式設計方式查詢視覺和慣性感應器的校正參數。
## <a name="device-recovery"></a>裝置復原
您可使用鎖緊銷底下的按鈕,將裝置韌體重設為原始韌體。
![Azure Kinect DK 復原按鈕](./media/resources/hardware-specs-media/recovery.png)
若要復原裝置,請參閱[這裡的指示](reset-azure-kinect-dk.md)。
## <a name="next-steps"></a>後續步驟
- [使用 Azure Kinect 感應器 SDK](about-sensor-sdk.md)
- [設定硬體](set-up-azure-kinect-dk.md)
| 35.952 | 237 | 0.54595 | yue_Hant | 0.906114 |
f48bb8faec8a9aff9ea8731eb6f16a31e7e8553c | 5,227 | md | Markdown | content/en/docs/CommunityLeaders/EventHandbooks/kube101/Ingress101/_index.md | ayushpattnaik/get-involved | a433c0b202c9428a02360e98439370a18887a2ab | [
"Apache-2.0"
] | 21 | 2021-02-08T13:00:28.000Z | 2022-01-30T09:05:40.000Z | content/en/docs/CommunityLeaders/EventHandbooks/kube101/Ingress101/_index.md | ayushpattnaik/get-involved | a433c0b202c9428a02360e98439370a18887a2ab | [
"Apache-2.0"
] | 43 | 2021-01-21T05:25:43.000Z | 2021-02-08T10:56:53.000Z | content/en/docs/CommunityLeaders/EventHandbooks/kube101/Ingress101/_index.md | ayushpattnaik/get-involved | a433c0b202c9428a02360e98439370a18887a2ab | [
"Apache-2.0"
] | 31 | 2021-02-09T02:52:23.000Z | 2021-11-23T15:53:42.000Z | ---
title: "Deploying Your First Ingress Deployment"
linkTitle: "Deploying Your First Ingress Deployment"
weight: 97
description: >-
Deploying your First Ingress Deployment
---
## What is an Ingress?
- In Kubernetes, an Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services.
- This lets you consolidate your routing rules into a single resource. For example, you might want to send requests to example.com/api/v1/ to an api-v1 service, and requests to example.com/api/v2/ to the api-v2 service. With an Ingress, you can easily set this up without creating a bunch of LoadBalancers or exposing each service on the Node.
### Kubernetes Ingress vs LoadBalancer vs NodePort
These options all do the same thing. They let you expose a service to external network requests.
They let you send a request from outside the Kubernetes cluster to a service inside the cluster.
### NodePort
![](https://raw.githubusercontent.com/collabnix/kubelabs/master/Ingress101/nodeport.png)
- NodePort is a configuration setting you declare in a service’s YAML. Set the service spec’s type to NodePort. Then, Kubernetes will allocate a specific port on each Node to that service, and any request to your cluster on that port gets forwarded to the service.
- This is cool and easy; it's just not super robust. You don't know what port your service is going to be allocated, and the port might get re-allocated at some point.
### LoadBalancer
![](https://raw.githubusercontent.com/collabnix/kubelabs/master/Ingress101/loadbalancer.png)
- You can set a service to be of type LoadBalancer the same way you’d set NodePort— specify the type property in the service’s YAML. There needs to be some external load balancer functionality in the cluster, typically implemented by a cloud provider.
- This is typically heavily dependent on the cloud provider—GKE creates a Network Load Balancer with an IP address that you can use to access your service.
- Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
## Ingress
![](https://raw.githubusercontent.com/collabnix/kubelabs/master/Ingress101/ingress.png)
- NodePort and LoadBalancer let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource to your service. You declare, create and destroy it separately to your services.
- This makes it decoupled and isolated from the services you want to expose. It also helps you to consolidate routing rules into one place.
- The one downside is that you need to configure an Ingress Controller for your cluster. But that’s pretty easy—in this example, we’ll use the Nginx Ingress Controller.
## How to Use Nginx Ingress Controller
- Start by creating the “mandatory” resources for Nginx Ingress in your cluster.
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
```
- Then, enable the ingress add-on for Minikube
```
minikube addons enable ingress
```
- Or, if you’re using Docker for Mac to run Kubernetes instead of Minikube.
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
```
- Check that it’s all set up correctly.
```
kubectl get pods --all-namespaces -l app=ingress-nginx
```
### Creating a Kubernetes Ingress
- First, let’s create two services to demonstrate how the Ingress routes our request. We’ll run two web applications that output a slightly different response.
{{< codenew file="/k8s/ingress101/apple.yaml" >}}
{{< codenew file="/k8s/ingress101/banana.yaml" >}}
```
$ kubectl apply -f apple.yaml
$ kubectl apply -f banana.yaml
```
- Create the Ingress in the cluster
{{< codenew file="/k8s/ingress101/ingress.yaml" >}}
```
kubectl create -f ingress.yaml
```
Perfect! Let’s check that it’s working. If you’re using Minikube, you might need to replace localhost with minikube IP.
```
$ curl -kL http://localhost/apple
apple
$ curl -kL http://localhost/banana
banana
$ curl -kL http://localhost/notfound
default backend - 404
```
## Ingress Controllers and Ingress Resources
- Kubernetes supports a high level abstraction called Ingress, which allows simple host or URL based HTTP routing. An ingress is a core concept (in beta) of Kubernetes, but is always implemented by a third party proxy. These implementations are known as ingress controllers. An ingress controller is responsible for reading the Ingress Resource information and processing that data accordingly. Different ingress controllers have extended the specification in different ways to support additional use cases.
- Ingress is tightly integrated into Kubernetes, meaning that your existing workflows around kubectl will likely extend nicely to managing ingress. Note that an ingress controller typically doesn’t eliminate the need for an external load balancer — the ingress controller simply adds an additional layer of routing and control behind the load balancer.
| 46.669643 | 507 | 0.776162 | eng_Latn | 0.994833 |
f48c111e0b2634547b2757e6415be53e34ec0b1e | 1,773 | md | Markdown | _posts/2016-07-28-Jorge-Manuel-Forever-Short-Sleeves-Chapel-Train-MermaidTrumpet.md | novstylessee/novstylessee.github.io | 4c99fd7f6148fa475871b044a67df2ac0151b9ba | [
"MIT"
] | null | null | null | _posts/2016-07-28-Jorge-Manuel-Forever-Short-Sleeves-Chapel-Train-MermaidTrumpet.md | novstylessee/novstylessee.github.io | 4c99fd7f6148fa475871b044a67df2ac0151b9ba | [
"MIT"
] | null | null | null | _posts/2016-07-28-Jorge-Manuel-Forever-Short-Sleeves-Chapel-Train-MermaidTrumpet.md | novstylessee/novstylessee.github.io | 4c99fd7f6148fa475871b044a67df2ac0151b9ba | [
"MIT"
] | null | null | null | ---
layout: post
date: 2016-07-28
title: "Jorge Manuel Forever Short Sleeves Chapel Train Mermaid/Trumpet"
category: Jorge Manuel
tags: [Jorge Manuel, Mermaid/Trumpet, Illusion, Chapel Train, Short Sleeves]
---
### Jorge Manuel Forever
Just **$819.99**
### Short Sleeves Chapel Train Mermaid/Trumpet
<table><tr><td>BRANDS</td><td>Jorge Manuel</td></tr><tr><td>Silhouette</td><td>Mermaid/Trumpet</td></tr><tr><td>Neckline</td><td>Illusion</td></tr><tr><td>Hemline/Train</td><td>Chapel Train</td></tr><tr><td>Sleeve</td><td>Short Sleeves</td></tr></table>
<a href="https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html"><img src="//img.readybrides.com/75177/jorge-manuel-forever.jpg" alt="Jorge Manuel Forever" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html"><img src="//img.readybrides.com/75178/jorge-manuel-forever.jpg" alt="Jorge Manuel Forever" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html"><img src="//img.readybrides.com/75179/jorge-manuel-forever.jpg" alt="Jorge Manuel Forever" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html"><img src="//img.readybrides.com/75180/jorge-manuel-forever.jpg" alt="Jorge Manuel Forever" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html"><img src="//img.readybrides.com/75176/jorge-manuel-forever.jpg" alt="Jorge Manuel Forever" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html](https://www.readybrides.com/en/jorge-manuel-/35833-jorge-manuel-forever.html)
| 93.315789 | 253 | 0.729272 | yue_Hant | 0.202938 |
f48c55aa2b3efe3059788aa26f64e1d237d111e3 | 11,426 | md | Markdown | Pitches/Why Your Business Needs Error Monitoring Software/text.md | GabeStah/Airbrake.io | df324d9feda42dee41b4aede6000960ba99c28ea | [
"MIT"
] | 1 | 2019-11-26T20:40:56.000Z | 2019-11-26T20:40:56.000Z | Pitches/Why Your Business Needs Error Monitoring Software/text.md | GabeStah/Airbrake.io | df324d9feda42dee41b4aede6000960ba99c28ea | [
"MIT"
] | 18 | 2020-02-28T20:35:27.000Z | 2022-02-26T03:34:27.000Z | Pitches/Why Your Business Needs Error Monitoring Software/text.md | GabeStah/Airbrake.io | df324d9feda42dee41b4aede6000960ba99c28ea | [
"MIT"
] | 1 | 2019-12-26T19:33:30.000Z | 2019-12-26T19:33:30.000Z | # Why Your Business Needs Error Monitoring Software
Modern software applications are flexible, capable tools. These applications empower users with a wide range of abilities, from communicating with loved ones and coworkers, to gleaning knowledge from millions of crowdsourced articles, to diagnosing afflictions and potentially saving lives, and even to building the next great application, which might provide even more fantastic possibilities.
Yet the Facebooks, Wikipedias, and Watsons of the world haven't become the huge successes that they are without many bumps along the way. Developing software is difficult, but for most modern applications, releasing your project out into the wild for the public is only the first step. Monitoring and maintenance are critical components for most public-facing applications after release, to ensure the product can handle the load, is performing as expected, is financially viable, and, of course, doesn't collapse under the weight of unexpected errors.
While not particularly common, even within projects where the entire software development life cycle was smooth sailing up until production launch, nothing can sink the ship quicker than a slew of errors, which may grind the service itself to a halt, or in the worst case, crash the application entirely.
Enter the power of error monitoring software. Even during development, but particularly after release, error monitoring software can provide that life line your organization needs to ensure your software remains fully functional. Any unforeseen errors can quickly be identified, examined, and resolved, without the need for user-generated feedback or error reports.
To better examine why your business could benefit from error monitoring software, we'll explore the advantages by answering questions related to the `Six Ws`: `Who`, `What`, `When`, `Where`, `Why`, and `How`. By the end, you should have a better understanding of how your business may benefit by utilizing error monitoring software for your own projects.
## Who is receiving this error?
Error monitoring software is designed to immediately inform you and your team when an error occurs, whether through email, via code hooks, or integrations with other services. Best of all, since the error monitor can be easily hooked into your own application code, you'll receive detailed reporting information on the error or issue that occurred, including appropriate user data. This might include associated account information for the afflicted user, the user's location when the error occurred, device the user was using, operating system version, browser version, session data, and so forth. In short, error monitoring software can automatically provide you with detailed information about the individuals who are experiencing errors, so you can better serve your users.
## Who is assigned to resolving the error?
Modern monitoring software includes tools for tracking and resolving issues throughout the entire team. This ensures that not only will you be aware of new exceptions as they occur, but you can monitor which team members are working on fixes for which errors. By employing common prioritization techniques across the suite of errors you've received, your team can easily track issues and formulate the best plan to tackling them in order of importance.
## What is the exact error that occurred?
Error monitoring software allows you to see the exact nature of the error, in addition to the detailed metadata included with the exception. For example, when an error occurs using [Airbrake.io's error monitoring software](https://airbrake.io/), Airbrake is able to report the error type/object, error message, parameters, a detailed backtrace, the environment in which the error occurred, the client machine data, the called API method that generated the error, the file that caused the error, and much more.
Using an error monitoring service allows for real-world debugging to occur without the need for user-generated feedback or reports. When issues arise, the monitor will provide the same level of detail (and often more) than would be available to a developer who was performing step-by-step debugging on the source code.
## What programming languages and platforms are supported?
Integrating error monitoring software into your development life cycle is great and all, but it needs to be compatible with the language or platform that your team is working with. Thankfully, most error monitoring services are compatible with all the popular languages, and even a handful of less-popular choices as well. For example, [Airbrake](https://airbrake.io/languages) is currently compatible with **20** officially supported languages and platforms, with more always being added, including the big names like `.NET`, `Ruby`, `PHP`, `JavaScript`, `Node.js`, and all mobile platforms. In nearly every case, if your team is working with it, chances are the error monitoring software you choose is compatible.
## What service integrations can I use?
Most monitoring services are designed to slide right into your existing workflow, without the need for additional work or added headaches. To simplify this process, most error monitors are compatible with the most popular development services, like `Slack`, `JIRA`, `GitHub`, `BitBucket`, `Campfire`, `Trello`, and many more. Moreover, a good error monitoring service will provide access to webhooks and an integration API, allowing your team to create a simple integration that enhances your workflow.
## When did the error occur?
Error monitoring software allows you and your team to be informed immediately when errors occur. Yet, just as importantly, the software will also keep historical, timestamped records of all errors. Not only does this allow you to pinpoint exactly when a particular issue or type of error occurred, but a friendly interface aggregates similar errors based on criteria you specify, or even automatically. Thus, not only can you see when any single instance of an error occurred, but you can view charts and graphs with information about issues over time, providing you with a bird's-eye view of the progress, and how particular builds or patches may have improved or impacted the occurrence of that issue.
## When is error monitoring appropriate?
The short answer: throughout the entire software development life cycle. The longer answer is that it depends on the needs of your team and your project. At the very least, error monitoring services excel once a public beta or release is underway, as the core power of such a service is providing detailed error feedback from users in the wild, without the need for intervention on your part.
## Where in the code did this error originate?
Knowing _that_ an error occurred is useful, but knowing exactly _where_ the error occurred within your code is exactly the kind of detail necessary to quickly and accurately find the cause and get it resolved. Error monitoring software provides detailed reports that include not just the methods or functions that generated the error, but the detailed backtrace, with the exact lines of code that led to this issue. Furthermore, when integrated with `GitHub` or `GitLab`, any part of the backtrace can be clicked through to take you to that exact line in the source code.
## Why is error monitoring important?
Creating functional and effective software without an error monitoring service to assist you is certainly possible. However, particularly for projects with growing user bases, doing so requires a great deal of foresight and a massive amount of upfront development time. Not only must you heavily emphasize bug squashing throughout the life cycle, but you must also have components in place within your released code that can provide detailed error feedback from users in the wild, so when something (inevitably) goes wrong after production, your team can be alerted and quickly respond. Certainly not an impossible task, but this requires a great deal of additional work that can be handled, easily and inexpensively, by error monitoring software.
Put simply, error monitoring software is a safety net. It doesn't allow your team to forego all levels of quality assurance leading up to and after release, but instead, it provides some breathing room. With an error monitor in place, you'll know that any unforeseen bugs, which do slip through the cracks and pop up after launch, will be immediately found and sent your way, so your team can quickly respond and get fixes out ASAP.
## Why did the error occur?
In most cases, once software is launched to production, getting detailed error reports often requires asking users affected by the error to take time out of their day to dig through directories for a crash dump or error log, and email or post it to your support forums. Even in the case where your application can provide automated error dump reports to your team, without the need for user intervention, a team member then needs to either manually comb through those reports, or to develop a software component that can perform that task for you.
With error monitoring software, those problems are largely eliminated. To figure out why an error occurred, you simply open that particular report within the user-friendly interface, and scan through the plethora of information attached to that error instance. Rather than asking a user to provide a crash dump, you can simply look over the error, moments after it occurred and you were alerted, to see exactly what part of the source code caused the issue, and begin working toward a resolution.
## How many errors does our software application have?
Most error monitoring services provide detailed statistics, charts, and historical graphs about the errors within your software. Not only can you view aggregate numbers for the overall system, but you can drill down with filtered queries and searches, to get data about errors that may have occurred only in `production`, within the `last 1 month`, that contain the word `IO`. This makes it easy to evaluate trends, so you can quickly determine if a particular type of error is occurring less frequently, as a result of a recent patch aimed at fixing it.
## How do I get started with error monitoring software?
Integrating and beginning to use error monitoring software is very simple. In most cases, simply [sign up for a free trial account](https://airbrake.io/account/new), locate the programming language or platform that suits your needs, follow the simple instructions to integrate with your project, and test it out by intentionally creating an error. Most monitoring services allow you to be up and running in under five minutes.
From there, you can improve the workflow by integrating the error monitoring software with other third-party services your team uses, such as source control and issue tracking. Once complete, you can breath easier, knowing that all future errors will be automatically tracked and reported across your team, providing you with a great deal of additional quality control throughout development and well after launch to production.
---
__SOURCES__
- https://airbrake.io/
- https://sentry.io/welcome/
- https://rollbar.com/features/
- https://raygun.com/error-monitoring-software
- http://www.nytimes.com/2012/12/04/health/quest-to-eliminate-diagnostic-lapses.html | 148.38961 | 780 | 0.804306 | eng_Latn | 0.999894 |
f48c8d236e4e435474c61a9892bacf8cb3f01821 | 9,797 | md | Markdown | docs/ru/query_language/syntax.md | malkfilipp/ClickHouse | 79a206b092cd465731020f331bc41f6951dbe751 | [
"Apache-2.0"
] | 1 | 2019-09-06T04:13:26.000Z | 2019-09-06T04:13:26.000Z | docs/ru/query_language/syntax.md | malkfilipp/ClickHouse | 79a206b092cd465731020f331bc41f6951dbe751 | [
"Apache-2.0"
] | null | null | null | docs/ru/query_language/syntax.md | malkfilipp/ClickHouse | 79a206b092cd465731020f331bc41f6951dbe751 | [
"Apache-2.0"
] | 1 | 2020-05-22T18:44:44.000Z | 2020-05-22T18:44:44.000Z | # Синтаксис
В системе есть два вида парсеров: полноценный парсер SQL (recursive descent parser) и парсер форматов данных (быстрый потоковый парсер).
Во всех случаях кроме запроса INSERT, используется только полноценный парсер SQL.
В запросе INSERT используется оба парсера:
```sql
INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')
```
Фрагмент `INSERT INTO t VALUES` парсится полноценным парсером, а данные `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` - быстрым потоковым парсером.
Данные могут иметь любой формат. При получении запроса, сервер заранее считывает в оперативку не более `max_query_size` байт запроса (по умолчанию, 1МБ), а всё остальное обрабатывается потоково.
Таким образом, в системе нет проблем с большими INSERT запросами, как в MySQL.
При использовании формата Values в INSERT запросе может сложиться иллюзия, что данные парсятся также, как выражения в запросе SELECT, но это не так. Формат Values гораздо более ограничен.
Далее пойдёт речь о полноценном парсере. О парсерах форматов, смотри раздел "Форматы".
## Пробелы
Между синтаксическими конструкциями (в том числе, в начале и конце запроса) может быть расположено произвольное количество пробельных символов. К пробельным символам относятся пробел, таб, перевод строки, CR, form feed.
## Комментарии
Поддерживаются комментарии в SQL-стиле и C-стиле.
Комментарии в SQL-стиле: от `--` до конца строки. Пробел после `--` может не ставиться.
Комментарии в C-стиле: от `/*` до `*/`. Такие комментарии могут быть многострочными. Пробелы тоже не обязательны.
## Ключевые слова {#syntax-keywords}
Ключевые слова (например, `SELECT`) регистронезависимы. Всё остальное (имена столбцов, функций и т. п.), в отличие от стандарта SQL, регистрозависимо.
Ключевые слова не зарезервированы (а всего лишь парсятся как ключевые слова в соответствующем контексте). Если вы используете [идентификаторы](#syntax-identifiers), совпадающие с ключевыми словами, заключите их в кавычки. Например, запрос `SELECT "FROM" FROM table_name` валиден, если таблица `table_name` имеет столбец с именем `"FROM"`.
## Идентификаторы {#syntax-identifiers}
Идентификаторы:
- Имена кластеров, баз данных, таблиц, разделов и столбцов;
- Функции;
- Типы данных;
- [Синонимы выражений](#syntax-expression_aliases).
Некоторые идентификаторы нужно указывать в кавычках (например, идентификаторы с пробелами). Прочие идентификаторы можно указывать без кавычек. Рекомендуется использовать идентификаторы, не требующие кавычек.
Идентификаторы не требующие кавычек соответствуют регулярному выражению `^[a-zA-Z_][0-9a-zA-Z_]*$` и не могут совпадать с [ключевыми словами](#syntax-keywords). Примеры: `x, _1, X_y__Z123_.`
Если вы хотите использовать идентификаторы, совпадающие с ключевыми словами, или использовать в идентификаторах символы, не входящие в регулярное выражение, заключите их в двойные или обратные кавычки, например, `"id"`, `` `id` ``.
## Литералы
Существуют: числовые, строковые, составные литералы и `NULL`.
### Числовые
Числовой литерал пытается распарситься:
- Сначала как знаковое 64-разрядное число, функцией [strtoull](https://en.cppreference.com/w/cpp/string/byte/strtoul).
- Если не получилось, то как беззнаковое 64-разрядное число, функцией [strtoll](https://en.cppreference.com/w/cpp/string/byte/strtol).
- Если не получилось, то как число с плавающей запятой, функцией [strtod](https://en.cppreference.com/w/cpp/string/byte/strtof).
- Иначе — ошибка.
Соответствующее значение будет иметь тип минимального размера, который вмещает значение.
Например, 1 парсится как `UInt8`, а 256 как `UInt16`. Подробнее о типах данных читайте в разделе [Типы данных](../data_types/index.md).
Примеры: `1`, `18446744073709551615`, `0xDEADBEEF`, `01`, `0.1`, `1e100`, `-1e-100`, `inf`, `nan`.
### Строковые {#syntax-string-literal}
Поддерживаются только строковые литералы в одинарных кавычках. Символы внутри могут быть экранированы с помощью обратного слеша. Следующие escape-последовательности имеют соответствующее специальное значение: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\a`, `\v`, `\xHH`. Во всех остальных случаях, последовательности вида `\c`, где `c` — любой символ, преобразуется в `c` . Таким образом, могут быть использованы последовательности `\'` и `\\`. Значение будет иметь тип [String](../data_types/string.md).
Минимальный набор символов, которых вам необходимо экранировать в строковых литералах: `'` и `\`. Одинарная кавычка может быть экранирована одинарной кавычкой, литералы `'It\'s'` и `'It''s'` эквивалентны.
### Составные
Поддерживаются конструкции для массивов: `[1, 2, 3]` и кортежей: `(1, 'Hello, world!', 2)`.
На самом деле, это вовсе не литералы, а выражение с оператором создания массива и оператором создания кортежа, соответственно.
Массив должен состоять хотя бы из одного элемента, а кортеж - хотя бы из двух.
Кортежи носят служебное значение для использования в секции `IN` запроса `SELECT`. Кортежи могут быть получены как результат запроса, но они не могут быть сохранены в базе данных (за исключением таблицы [Memory](../operations/table_engines/memory.md).)
### NULL {#null-literal}
Обозначает, что значение отсутствует.
Чтобы в поле таблицы можно было хранить `NULL`, оно должно быть типа [Nullable](../data_types/nullable.md).
В зависимости от формата данных (входных или выходных) `NULL` может иметь различное представление. Подробнее смотрите в документации для [форматов данных](../interfaces/formats.md#formats).
При обработке `NULL` есть множество особенностей. Например, если хотя бы один из аргументов операции сравнения — `NULL`, то результатом такой операции тоже будет `NULL`. Этим же свойством обладают операции умножения, сложения и пр. Подробнее читайте в документации на каждую операцию.
В запросах можно проверить `NULL` с помощью операторов [IS NULL](operators.md#operator-is-null) и [IS NOT NULL](operators.md), а также соответствующих функций `isNull` и `isNotNull`.
## Функции
Функции записываются как идентификатор со списком аргументов (возможно, пустым) в скобках. В отличие от стандартного SQL, даже в случае пустого списка аргументов, скобки обязательны. Пример: `now()`.
Бывают обычные и агрегатные функции (смотрите раздел "Агрегатные функции"). Некоторые агрегатные функции могут содержать два списка аргументов в круглых скобках. Пример: `quantile(0.9)(x)`. Такие агрегатные функции называются "параметрическими", а первый список аргументов называется "параметрами". Синтаксис агрегатных функций без параметров ничем не отличается от обычных функций.
## Операторы
Операторы преобразуются в соответствующие им функции во время парсинга запроса, с учётом их приоритета и ассоциативности.
Например, выражение `1 + 2 * 3 + 4` преобразуется в `plus(plus(1, multiply(2, 3)), 4)`.
## Типы данных и движки таблиц
Типы данных и движки таблиц в запросе `CREATE` записываются также, как идентификаторы или также как функции. То есть, могут содержать или не содержать список аргументов в круглых скобках. Подробнее смотрите разделы "Типы данных", "Движки таблиц", "CREATE".
## Синонимы выражений {#syntax-expression_aliases}
Синоним — это пользовательское имя выражения в запросе.
```
expr AS alias
```
- `AS` — ключевое слово для определения синонимов. Можно определить синоним для имени таблицы или столбца в секции `SELECT` без использования ключевого слова `AS` .
Например, `SELECT table_name_alias.column_name FROM table_name table_name_alias`.
В функции [CAST](functions/type_conversion_functions.md#type_conversion_function-cast), ключевое слово `AS` имеет другое значение. Смотрите описание функции.
- `expr` — любое выражение, которое поддерживает ClickHouse.
Например, `SELECT column_name * 2 AS double FROM some_table`.
- `alias` — имя для `выражения`. Синонимы должны соответствовать синтаксису [идентификаторов](#syntax-identifiers).
Например, `SELECT "table t".column_name FROM table_name AS "table t"`.
### Примечания по использованию
Синонимы являются глобальными для запроса или подзапроса, и вы можете определить синоним в любой части запроса для любого выражения. Например, `SELECT (1 AS n) + 2, n`.
Синонимы не передаются в подзапросы и между подзапросами. Например, при выполнении запроса `SELECT (SELECT sum(b.a) + num FROM b) - a.a AS num FROM a` ClickHouse сгенерирует исключение `Unknown identifier: num`.
Если синоним определен для результирующих столбцов в секции `SELECT` вложенного запроса, то эти столбцы отображаются во внешнем запросе. Например, `SELECT n + m FROM (SELECT 1 AS n, 2 AS m)`.
Будьте осторожны с синонимами, совпадающими с именами столбцов или таблиц. Рассмотрим следующий пример:
```
CREATE TABLE t
(
a Int,
b Int
)
ENGINE = TinyLog()
```
```
SELECT
argMax(a, b),
sum(b) AS b
FROM t
Received exception from server (version 18.14.17):
Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query.
```
В этом примере мы объявили таблицу `t` со столбцом `b`. Затем, при выборе данных, мы определили синоним `sum(b) AS b`. Поскольку синонимы глобальные, то ClickHouse заменил литерал `b` в выражении `argMax(a, b)` выражением `sum(b)`. Эта замена вызвала исключение.
## Звёздочка
В запросе `SELECT`, вместо выражения может стоять звёздочка. Подробнее смотрите раздел "SELECT".
## Выражения {#syntax-expressions}
Выражение представляет собой функцию, идентификатор, литерал, применение оператора, выражение в скобках, подзапрос, звёздочку. А также может содержать синоним.
Список выражений - одно выражение или несколько выражений через запятую.
Функции и операторы, в свою очередь, в качестве аргументов, могут иметь произвольные выражения.
[Оригинальная статья](https://clickhouse.yandex/docs/ru/query_language/syntax/) <!--hide-->
| 57.292398 | 501 | 0.771257 | rus_Cyrl | 0.973012 |
f48ce06374fe65ff75971a46ef5f10daec2fd22f | 667 | md | Markdown | docs/v1/ApiKey.md | technicalpickles/datadog-api-client-ruby | 019b898a554f9cde6ffefc912784f62c0a8cc8d6 | [
"Apache-2.0"
] | null | null | null | docs/v1/ApiKey.md | technicalpickles/datadog-api-client-ruby | 019b898a554f9cde6ffefc912784f62c0a8cc8d6 | [
"Apache-2.0"
] | 1 | 2021-01-27T05:06:34.000Z | 2021-01-27T05:12:09.000Z | docs/v1/ApiKey.md | ConnectionMaster/datadog-api-client-ruby | dd9451bb7508c1c1211556c995a04215322ffada | [
"Apache-2.0"
] | null | null | null | # DatadogAPIClient::V1::ApiKey
## Properties
| Name | Type | Description | Notes |
| ---- | ---- | ----------- | ----- |
| **created** | **String** | Date of creation of the API key. | [optional][readonly] |
| **created_by** | **String** | Datadog user handle that created the API key. | [optional][readonly] |
| **key** | **String** | API key. | [optional][readonly] |
| **name** | **String** | Name of your API key. | [optional] |
## Example
```ruby
require 'datadog_api_client/v1'
instance = DatadogAPIClient::V1::ApiKey.new(
created: 2019-08-02 15:31:07,
created_by: [email protected],
key: 1234512345123456abcabc912349abcd,
name: example user
)
```
| 26.68 | 102 | 0.62069 | eng_Latn | 0.297687 |
f48d5d8df40ec24cab8720602f1d66f0a6727cbf | 9,998 | md | Markdown | packages/rest-crud/CHANGELOG.md | ferrantejake/loopback-next | 90d157f2762f8c61199ade35014bebc45e7a3d2f | [
"MIT"
] | null | null | null | packages/rest-crud/CHANGELOG.md | ferrantejake/loopback-next | 90d157f2762f8c61199ade35014bebc45e7a3d2f | [
"MIT"
] | null | null | null | packages/rest-crud/CHANGELOG.md | ferrantejake/loopback-next | 90d157f2762f8c61199ade35014bebc45e7a3d2f | [
"MIT"
] | null | null | null | # Change Log
All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.
## [0.8.19](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-12-07)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.18](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-11-18)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.17](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-11-05)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.16](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-10-07)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.15](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-09-17)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.14](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-09-15)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.13](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-08-27)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.12](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-08-19)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.11](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-08-05)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.10](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-07-20)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.9](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-06-30)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.8](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-06-23)
### Bug Fixes
* set node version to >=10.16 to support events.once ([e39da1c](https://github.com/strongloop/loopback-next/commit/e39da1ca47728eafaf83c10ce35b09b03b6a4edc))
## [0.8.7](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-06-11)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.6](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-05-28)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.5](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-05-20)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.4](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-05-19)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.3](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-05-07)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.2](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-04-29)
**Note:** Version bump only for package @loopback/rest-crud
## [0.8.1](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-04-23)
**Note:** Version bump only for package @loopback/rest-crud
# [0.8.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-04-22)
### Features
* update package.json and .travis.yml for builds ([cb2b8e6](https://github.com/strongloop/loopback-next/commit/cb2b8e6a18616dda7783c0193091039d4e608131))
## [0.7.4](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-04-11)
**Note:** Version bump only for package @loopback/rest-crud
## [0.7.3](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-04-08)
**Note:** Version bump only for package @loopback/rest-crud
## [0.7.2](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-03-24)
**Note:** Version bump only for package @loopback/rest-crud
## [0.7.1](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-03-17)
**Note:** Version bump only for package @loopback/rest-crud
# [0.7.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-03-05)
### chore
* remove support for Node.js v8.x ([4281d9d](https://github.com/strongloop/loopback-next/commit/4281d9df50f0715d32879e1442a90b643ec8f542))
### Code Refactoring
* **rest:** make getApiSpec() async ([fe3df1b](https://github.com/strongloop/loopback-next/commit/fe3df1b85904ee8b8a005fa6eddf150d28ad2a08))
### Features
* add `tslib` as dependency ([a6e0b4c](https://github.com/strongloop/loopback-next/commit/a6e0b4ce7b862764167cefedee14c1115b25e0a4)), closes [#4676](https://github.com/strongloop/loopback-next/issues/4676)
* use [@param](https://github.com/param).filter and [@param](https://github.com/param).where decorators ([896ef74](https://github.com/strongloop/loopback-next/commit/896ef7485376b3aedcca01a40f828bf1ed9470ae))
* **rest-crud:** add CrudRestApiBuilder ([bc5d56f](https://github.com/strongloop/loopback-next/commit/bc5d56fd4f10759756cd0ef6fbc922c02b5a9894))
### BREAKING CHANGES
* **rest:** Api specifications are now emitted as a Promise instead
of a value object. Calls to getApiSpec function must switch from
the old style to new style as follows:
1. Old style
```ts
function() {
// ...
const spec = restApp.restServer.getApiSpec();
// ...
}
```
2. New style
```ts
async function() {
// ...
const spec = await restApp.restServer.getApiSpec();
// ...
}
```
* Node.js v8.x is now end of life. Please upgrade to version
10 and above. See https://nodejs.org/en/about/releases.
## [0.6.6](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-02-06)
**Note:** Version bump only for package @loopback/rest-crud
## [0.6.5](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-02-05)
**Note:** Version bump only for package @loopback/rest-crud
## [0.6.4](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-01-27)
**Note:** Version bump only for package @loopback/rest-crud
## [0.6.3](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2020-01-07)
**Note:** Version bump only for package @loopback/rest-crud
## [0.6.2](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-12-09)
**Note:** Version bump only for package @loopback/rest-crud
## [0.6.1](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-11-25)
**Note:** Version bump only for package @loopback/rest-crud
# [0.6.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-11-12)
### Features
* add defineCrudRepositoryClass ([8e3e21d](https://github.com/strongloop/loopback-next/commit/8e3e21d41c7df7a52e9420da09d09881c97cb771))
# [0.5.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-10-24)
### Features
* simplify model schema with excluded properties ([b554ac8](https://github.com/strongloop/loopback-next/commit/b554ac8a08a518f112d111ebabcac48279ada7f8))
# [0.4.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-10-07)
### Features
* generate controller based on Model name ([04a3318](https://github.com/strongloop/loopback-next/commit/04a3318))
## [0.3.2](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-09-28)
**Note:** Version bump only for package @loopback/rest-crud
## [0.3.1](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-09-27)
**Note:** Version bump only for package @loopback/rest-crud
# [0.3.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-09-17)
### Features
* use descriptive title to describe schema of POST (create) request bodies ([8f49a45](https://github.com/strongloop/loopback-next/commit/8f49a45))
# [0.2.0](https://github.com/strongloop/loopback-next/compare/@loopback/[email protected]...@loopback/[email protected]) (2019-09-06)
### Features
* **rest-crud:** add "replaceById" endpoint ([06d0967](https://github.com/strongloop/loopback-next/commit/06d0967))
# 0.1.0 (2019-09-03)
### Bug Fixes
* **rest-crud:** fix pkg name in license headers ([6ad0bb5](https://github.com/strongloop/loopback-next/commit/6ad0bb5))
### Features
* **rest-crud:** initial implementation ([4374160](https://github.com/strongloop/loopback-next/commit/4374160))
| 25.901554 | 208 | 0.711842 | eng_Latn | 0.177287 |
f48e0443b91182612802a4e9f65d7973ab1a9465 | 19,809 | md | Markdown | content/en/continuous_integration/setup_pipelines/jenkins.md | pgguru/documentation | 5f4abed98b242c3734f31eb132c2b4ccb5674c5e | [
"BSD-3-Clause"
] | null | null | null | content/en/continuous_integration/setup_pipelines/jenkins.md | pgguru/documentation | 5f4abed98b242c3734f31eb132c2b4ccb5674c5e | [
"BSD-3-Clause"
] | null | null | null | content/en/continuous_integration/setup_pipelines/jenkins.md | pgguru/documentation | 5f4abed98b242c3734f31eb132c2b4ccb5674c5e | [
"BSD-3-Clause"
] | null | null | null | ---
title: Set up Tracing on a Jenkins Pipeline
kind: documentation
further_reading:
- link: "/continuous_integration/explore_pipelines"
tag: "Documentation"
text: "Explore Pipeline Execution Results and Performance"
- link: "/continuous_integration/setup_pipelines/custom_commands/"
tag: "Documentation"
text: "Extend Pipeline Visibility by tracing individual commands"
- link: "/continuous_integration/troubleshooting/"
tag: "Documentation"
text: "Troubleshooting CI"
---
{{< site-region region="gov" >}}
<div class="alert alert-warning">CI Visibility is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.</div>
{{< /site-region >}}
## Compatibility
Supported Jenkins versions:
* Jenkins >= 2.164.1
## Prerequisite
Install the [Datadog Agent][1] on the Jenkins controller instance.
If the Jenkins controller and the Datadog Agent have been deployed to a Kubernetes cluster, Datadog recommends using the [Admission Controller][2], which automatically sets the `DD_AGENT_HOST` environment variable in the Jenkins controller pod to communicate with the local Datadog Agent.
## Install the Datadog Jenkins plugin
Install and enable the [Datadog Jenkins plugin][3] v3.3.0 or newer:
1. In your Jenkins instance web interface, go to **Manage Jenkins > Manage Plugins**.
2. In the [Update Center][4] on the **Available** tab, search for `Datadog Plugin`.
3. Select the checkbox next to the plugin, and install using one of the two install buttons at the bottom of the screen.
4. To verify that the plugin is installed, search for `Datadog Plugin` on the **Installed** tab.
## Enabling CI Visibility on the plugin
{{< tabs >}}
{{% tab "Using UI" %}}
1. In your Jenkins instance web interface, go to **Manage Jenkins > Configure System**.
2. Go to the `Datadog Plugin` section, scrolling down the configuration screen.
3. Select the `Datadog Agent` mode.
4. Configure the `Agent` host.
5. Configure the `Traces Collection` port (default `8126`).
6. Click the `Enable CI Visibility` checkbox to activate it.
7. (Optional) Configure your CI Instance name.
8. Check the connectivity with the Datadog Agent.
9. Save your configuration.
{{< img src="ci/ci-jenkins-plugin-config.png" alt="Datadog Plugin configuration for Jenkins" style="width:100%;">}}
{{% /tab %}}
{{% tab "Using configuration-as-code" %}}
If your Jenkins instance uses the Jenkins [`configuration-as-code`][1] plugin:
1. Create or modify the configuration YAML by adding an entry for `datadogGlobalConfiguration`:
```yaml
unclassified:
datadogGlobalConfiguration:
# Select the `Datadog Agent` mode.
reportWith: "DSD"
# Configure the `Agent` host
targetHost: "agent-host"
# Configure the `Traces Collection` port (default `8126`).
targetTraceCollectionPort: 8126
# Enable CI Visibility flag
enableCiVisibility: true
# (Optional) Configure your CI Instance name
ciInstanceName: "jenkins"
```
2. In your Jenkins instance web interface, go to **Manage Jenkins > Configuration as Code**.
3. Apply or reload the configuration.
4. Check the configuration using the `View Configuration` button.
[1]: https://github.com/jenkinsci/configuration-as-code-plugin/blob/master/README.md
{{% /tab %}}
{{% tab "Using Groovy" %}}
1. In your Jenkins instance web interface, go to **Manage Jenkins > Script Console**.
2. Run the configuration script:
```groovy
import jenkins.model.*
import org.datadog.jenkins.plugins.datadog.DatadogGlobalConfiguration
def j = Jenkins.getInstance()
def d = j.getDescriptor("org.datadog.jenkins.plugins.datadog.DatadogGlobalConfiguration")
// Select the Datadog Agent mode
d.setReportWith('DSD')
// Configure the Agent host.
d.setTargetHost('<agent host>')
// Configure the Traces Collection port (default 8126)
d.setTargetTraceCollectionPort(8126)
// Enable CI Visibility
d.setEnableCiVisibility(true)
// (Optional) Configure your CI Instance name
d.setCiInstanceName("jenkins")
// Save config
d.save()
```
{{% /tab %}}
{{% tab "Using Environment Variables" %}}
1. Set the following environment variables on your Jenkins instance machine:
```bash
# Select the Datadog Agent mode
DATADOG_JENKINS_PLUGIN_REPORT_WITH=DSD
# Configure the Agent host
DATADOG_JENKINS_PLUGIN_TARGET_HOST=agent-host
# Configure the Traces Collection port (default 8126)
DATADOG_JENKINS_PLUGIN_TARGET_TRACE_COLLECTION_PORT=8126
# Enable CI Visibility
DATADOG_JENKINS_PLUGIN_ENABLE_CI_VISIBILITY=true
# (Optional) Configure your CI Instance name
DATADOG_JENKINS_PLUGIN_CI_VISIBILITY_CI_INSTANCE_NAME=jenkins
```
2. Restart your Jenkins instance.
{{% /tab %}}
{{< /tabs >}}
To verify that CI Visibility is enabled, go to `Jenkins Log` and search for:
{{< code-block lang="text" >}}
Re/Initialize Datadog-Plugin Agent Http Client
TRACE -> http://<HOST>:<TRACE_PORT>/v0.3/traces
{{< /code-block >}}
## Enable job log collection
This optional step enables the collection of job logs. It involves two parts: opening a log collection port on the Datadog Agent, and enabling log collection on the Datadog Plugin.
### Datadog Agent
First, enable job log collection on the Datadog Agent by opening a TCP port to collect logs:
1. Add `logs_enabled: true` to your Agent configuration file `datadog.yaml`, or set the `DD_LOGS_ENABLED=true` environment variable.
2. Create a file at `/etc/datadog-agent/conf.d/jenkins.d/conf.yaml` on Linux with the contents below. Make sure that `service` matches the CI instance name provided earlier. For other operating systems, see the [Agent configuration directory][5] guide.
{{< code-block lang="yaml" >}}
logs:
- type: tcp
port: 10518
service: my-jenkins-instance
source: jenkins
{{< /code-block >}}
3. [Restart the Agent][6] for the changes to take effect.
With this setup, the Agent listens on port `10518` for logs.
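Optionally, to confirm that the Agent is accepting logs on that port, send a test line over TCP from the Jenkins controller host. The hostname and message below are placeholders; the test line should appear in the Datadog Log Explorer under the configured service after a few minutes.

{{< code-block lang="bash" >}}
# Send a test log line to the Agent's TCP listener
# (adjust the host if the Agent runs on another machine)
echo "jenkins log collection test" | nc -w 2 agent-host 10518
{{< /code-block >}}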
### Datadog Plugin
Second, enable job log collection on the Datadog Plugin:
{{< tabs >}}
{{% tab "Using UI" %}}
1. In the web interface of your Jenkins instance, go to **Manage Jenkins > Configure System**.
2. Go to the `Datadog Plugin` section, scrolling down the configuration screen.
3. Select the `Datadog Agent` mode.
4. Configure the `Agent` host, if not previously configured.
5. Configure the `Log Collection` port, as configured in the previous step.
6. Click the `Enable Log Collection` checkbox to activate it.
7. Check the connectivity with the Datadog Agent.
8. Save your configuration.
{{% /tab %}}
{{% tab "Using configuration-as-code" %}}
If your Jenkins instance uses the Jenkins [`configuration-as-code`][1] plugin:
1. Create or modify the configuration YAML for the entry `datadogGlobalConfiguration`:
```yaml
unclassified:
datadogGlobalConfiguration:
# Select the `Datadog Agent` mode.
reportWith: "DSD"
# Configure the `Agent` host
targetHost: "agent-host"
# Configure the `Log Collection` port, as configured in the previous step.
targetLogCollectionPort: 10518
# Enable Log collection
collectBuildLogs: true
```
2. In your Jenkins instance web interface, go to **Manage Jenkins > Configuration as Code**.
3. Apply or reload the configuration.
4. Check the configuration using the `View Configuration` button.
[1]: https://github.com/jenkinsci/configuration-as-code-plugin/blob/master/README.md
{{% /tab %}}
{{% tab "Using Groovy" %}}
1. In your Jenkins instance web interface, go to **Manage Jenkins > Script Console**.
2. Run the configuration script:
```groovy
import jenkins.model.*
import org.datadog.jenkins.plugins.datadog.DatadogGlobalConfiguration
def j = Jenkins.getInstance()
def d = j.getDescriptor("org.datadog.jenkins.plugins.datadog.DatadogGlobalConfiguration")
// Select the Datadog Agent mode
d.setReportWith('DSD')
// Configure the Agent host, if not previously configured.
d.setTargetHost('<agent host>')
// Configure the Log Collection port, as configured in the previous step.
d.setTargetLogCollectionPort(10518)
// Enable log collection
d.setCollectBuildLogs(true)
// Save config
d.save()
```
{{% /tab %}}
{{% tab "Using Environment Variables" %}}
1. Set the following environment variables on your Jenkins instance machine:
```bash
# Select the Datadog Agent mode
DATADOG_JENKINS_PLUGIN_REPORT_WITH=DSD
# Configure the Agent host
DATADOG_JENKINS_PLUGIN_TARGET_HOST=agent-host
# Configure the Log Collection port, as configured in the previous step.
DATADOG_JENKINS_PLUGIN_TARGET_LOG_COLLECTION_PORT=10518
# Enable log collection
DATADOG_JENKINS_PLUGIN_COLLECT_BUILD_LOGS=true
```
2. Restart your Jenkins instance.
{{% /tab %}}
{{< /tabs >}}
## Set the default branch name
To report pipeline results, attach the default branch name (for example, `main`) to pipeline spans in an attribute called `git.default_branch`. This is usually done automatically, but in some cases the plugin cannot extract this information because it might not be provided by Jenkins.
If this happens, set the default branch manually using the `DD_GIT_DEFAULT_BRANCH` environment variable in your build. For example:
{{< code-block lang="groovy" >}}
pipeline {
agent any
environment {
DD_GIT_DEFAULT_BRANCH = 'main'
...
}
stages {
...
}
}
{{< /code-block >}}
## Set Git information manually
The Jenkins plugin uses environment variables to determine the Git information. However, these environment variables are not always set automatically, depending on the Git plugin used in the pipeline.
If the Git information is not detected automatically, you can set the following environment variables manually.
**Note:** These variables are optional, but if they are set, they take precedence over the Git information set by other Jenkins plugins.
`DD_GIT_REPOSITORY_URL` (Optional)
: The repository URL of your service.<br/>
**Example**: `https://github.com/my-org/my-repo.git`
`DD_GIT_BRANCH` (Optional)
: The branch name.<br/>
**Example**: `main`
`DD_GIT_TAG` (Optional)
: The tag of the commit (if any).<br/>
**Example**: `0.1.0`
`DD_GIT_COMMIT_SHA` (Optional)
: The full commit SHA, expressed as 40 hexadecimal characters.<br/>
**Example**: `faaca5c59512cdfba9402c6e67d81b4f5701d43c`
`DD_GIT_COMMIT_MESSAGE` (Optional)
: The commit message.<br/>
**Example**: `Initial commit message`
`DD_GIT_COMMIT_AUTHOR_NAME` (Optional)
: The name of the author of the commit.<br/>
**Example**: `John Smith`
`DD_GIT_COMMIT_AUTHOR_EMAIL` (Optional)
: The email of the author of the commit.<br/>
**Example**: `[email protected]`
`DD_GIT_COMMIT_AUTHOR_DATE` (Optional)
: The date when the author submitted the commit expressed in ISO 8601 format.<br/>
**Example**: `2021-08-16T15:41:45.000Z`
`DD_GIT_COMMIT_COMMITTER_NAME` (Optional)
: The name of the committer of the commit.<br/>
**Example**: `Jane Smith`
`DD_GIT_COMMIT_COMMITTER_EMAIL` (Optional)
: The email of the committer of the commit.<br/>
**Example**: `[email protected]`
`DD_GIT_COMMIT_COMMITTER_DATE` (Optional)
: The date when the committer submitted the commit expressed in ISO 8601 format.<br/>
**Example**: `2021-08-16T15:41:45.000Z`
If you set only the repository, branch, and commit, the plugin tries to extract the rest of the Git information from the `.git` folder.

For example:
{{< code-block lang="groovy" >}}
pipeline {
agent any
stages {
stage('Checkout') {
steps {
script {
def gitVars = git url:'https://github.com/my-org/my-repo.git', branch:'some/feature-branch'
// Setting Git information manually via environment variables.
env.DD_GIT_REPOSITORY_URL=gitVars.GIT_URL
env.DD_GIT_BRANCH=gitVars.GIT_BRANCH
env.DD_GIT_COMMIT_SHA=gitVars.GIT_COMMIT
}
}
}
stage('Test') {
steps {
// Execute the rest of the pipeline.
}
}
}
}
{{< /code-block >}}
## Customization
### Setting custom tags for your pipelines
The Datadog plugin adds a `datadog` step that allows adding custom tags to your pipeline-based jobs.
In declarative pipelines, add the step to a top-level `options` block:
{{< code-block lang="groovy" >}}
def DD_TYPE = "release"
pipeline {
agent any
options {
datadog(tags: ["team:backend", "type:${DD_TYPE}", "${DD_TYPE}:canary"])
}
stages {
stage('Example') {
steps {
echo "Hello world."
}
}
}
}
{{< /code-block >}}
In scripted pipelines, wrap the relevant section with the `datadog` step:
{{< code-block lang="groovy" >}}
datadog(tags: ["team:backend", "release:canary"]){
node {
stage('Example') {
echo "Hello world."
}
}
}
{{< /code-block >}}
### Setting global custom tags
You can configure the Jenkins Plugin to send custom tags in all pipeline traces:
1. In the web interface of your Jenkins instance, go to **Manage Jenkins > Configure System**.
2. Go to the `Datadog Plugin` section, scrolling down the configuration screen.
3. Click on the `Advanced` button.
4. Configure the `Global Tags`.
5. Configure the `Global Job Tags`.
6. Save your configuration.
**Global tags**
: A comma-separated list of tags to apply to all metrics, traces, events, and service checks. Tags can include environment variables that are defined in the Jenkins controller instance.<br/>
**Environment variable**: `DATADOG_JENKINS_PLUGIN_GLOBAL_TAGS`<br/>
**Example**: `key1:value1,key2:${SOME_ENVVAR},${OTHER_ENVVAR}:value3`
**Global job tags**
: A comma-separated list of regexes to match a job and a list of tags to apply to that job. Tags can include environment variables that are defined in the Jenkins controller instance. Tags can reference match groups in the regex using the `$` symbol.<br/>
**Environment variable**: `DATADOG_JENKINS_PLUGIN_GLOBAL_JOB_TAGS`<br/>
**Example**: `(.*?)_job_(.*?)_release, owner:$1, release_env:$2, optional:Tag3`
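If you manage the plugin through environment variables instead of the UI, you can set the same options with the variables listed above and then restart your Jenkins instance. The tag values below are illustrative:

{{< code-block lang="bash" >}}
# Apply these tags to all metrics, traces, events, and service checks
DATADOG_JENKINS_PLUGIN_GLOBAL_TAGS=team:backend,owner:ci-team
# Tag any job matching the regex; $1 refers to the first match group
DATADOG_JENKINS_PLUGIN_GLOBAL_JOB_TAGS=(.*?)_release, release_env:$1
{{< /code-block >}}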
### Including or excluding pipelines
You can configure the Jenkins Plugin to include or exclude some pipelines:
1. In the web interface of your Jenkins instance, go to **Manage Jenkins > Configure System**.
2. Go to the `Datadog Plugin` section by scrolling down the configuration screen.
3. Click on the `Advanced` button.
4. Configure the `Excluded Jobs`.
5. Configure the `Included Jobs`.
6. Save your configuration.
**Excluded jobs**
: A comma-separated list of Jenkins jobs that should not be monitored. The exclusion applies to all metrics, traces, events, and service checks. Excluded jobs can use regular expressions to reference multiple jobs.<br/>
**Environment variable**: `DATADOG_JENKINS_PLUGIN_EXCLUDED`<br/>
**Example**: `susans-job,johns-.*,prod_folder/prod_release`
**Included jobs**
: A comma-separated list of Jenkins job names that should be monitored. If the included jobs list is empty, all jobs that are not excluded explicitly are monitored. The inclusion applies to all metrics, traces, events, and service checks. Included jobs can use regular expressions to reference multiple jobs.<br/>
**Environment variable**: `DATADOG_JENKINS_PLUGIN_INCLUDED`<br/>
**Example**: `susans-job,johns-.*,prod_folder/prod_release`
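The same filters can be set through the environment variables listed above, followed by a restart of the Jenkins instance. The job names below are illustrative:

{{< code-block lang="bash" >}}
# Do not monitor maintenance jobs
DATADOG_JENKINS_PLUGIN_EXCLUDED=cleanup-.*,backup-job
# Only monitor jobs under the prod folder
DATADOG_JENKINS_PLUGIN_INCLUDED=prod_folder/.*
{{< /code-block >}}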
## Visualize pipeline data in Datadog
Once the integration is successfully configured, both the [Pipelines][7] and [Pipeline Executions][8] pages populate with data after pipelines finish.
**Note**: The Pipelines page shows data for only the default branch of each repository.
## Troubleshooting
### Enable DEBUG log level for the Datadog Plugin
If you have any issues with the Datadog Plugin, you can set the plugin's log level to `DEBUG`. At this level, you can see stack trace details when an exception is thrown.
1. In your Jenkins instance web interface, go to **Manage Jenkins > System log**.
2. Click the `Add new log recorder` button.
3. Enter a name for the log recorder, for example: **Datadog Plugin Logs**.
4. Add the following loggers to the list:
- Logger: `org.datadog.jenkins.plugins.datadog.clients` -> Log Level `ALL`
- Logger: `org.datadog.jenkins.plugins.datadog.traces` -> Log Level `ALL`
- Logger: `org.datadog.jenkins.plugins.datadog.logs` -> Log Level `ALL`
- Logger: `org.datadog.jenkins.plugins.datadog.model` -> Log Level `ALL`
- Logger: `org.datadog.jenkins.plugins.datadog.listeners` -> Log Level `ALL`
5. Save the configuration.
You may also want to split the loggers into different log recorders.
Once the log recorders are successfully configured, you can check the logs in `DEBUG` mode by accessing the desired log recorder through **Manage Jenkins > System log**.
If you trigger a Jenkins pipeline, you can search for the message `Send pipeline traces` in the **Datadog Plugin Logs**. This message indicates that the plugin is sending **CI Visibility** data to the **Datadog Agent**.
{{< code-block lang="text" >}}
Send pipeline traces.
...
Send pipeline traces.
...
{{< /code-block >}}
### The Datadog Plugin cannot write payloads to the server
If the following error message appears in the **Jenkins Log**, make sure that the plugin configuration is correct.
{{< code-block lang="text" >}}
Error writing to server
{{< /code-block >}}
1. If you are using `localhost` as the hostname, try to change it to the server hostname instead.
2. If your Jenkins instance is behind an HTTP proxy, go to **Manage Jenkins** > **Manage Plugins** > **Advanced tab** and make sure the proxy configuration is correct.
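You can also verify basic network connectivity from the Jenkins controller to the Datadog Agent. Depending on your Agent version, the trace intake exposes an `/info` endpoint; the host and port below are placeholders, so use the values from your plugin configuration:

{{< code-block lang="bash" >}}
# A JSON response indicates that the Agent's trace intake is reachable
curl -s http://agent-host:8126/info
{{< /code-block >}}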
### The Datadog Plugin section does not appear in the Jenkins configuration
If the Datadog Plugin section does not appear in the Jenkins configuration section, make sure that the plugin is enabled. To do so:
1. In your Jenkins instance web interface, go to **Manage Jenkins > Manage Plugins**.
2. Search for `Datadog Plugin` in the **Installed** tab.
3. Check that the `Enabled` checkbox is marked.
4. If you enable the plugin here, restart your Jenkins instance using the `/safeRestart` URL path.
### The CI Visibility option does not appear in the Datadog Plugin section
If the CI Visibility option does not appear in the Datadog Plugin section, make sure that the correct version is installed and restart the Jenkins instance. To do so:
1. In your Jenkins instance web interface, go to **Manage Jenkins > Manage Plugins**.
2. Search for `Datadog Plugin` in the **Installed** tab.
3. Check that the installed version is correct.
4. Restart your Jenkins instance using the `/safeRestart` URL path.
### The plugin's Tracer fails to initialize because the APM Java Tracer is being used to instrument Jenkins
If the following error message appears in the **Jenkins Log**, make sure that you are using the Jenkins plugin v3.1.0 or newer.
{{< code-block lang="text" >}}
Failed to reinitialize Datadog-Plugin Tracer, Cannot enable traces collection via plugin if the Datadog Java Tracer is being used as javaagent in the Jenkins startup command. This error will not affect your pipelines executions.
{{< /code-block >}}
## Further reading
{{< partial name="whats-next/whats-next.html" >}}
[1]: /agent/
[2]: https://docs.datadoghq.com/agent/cluster_agent/admission_controller/
[3]: https://plugins.jenkins.io/datadog/
[4]: https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Howtoinstallplugins
[5]: /agent/guide/agent-configuration-files/?tab=agentv6v7#agent-configuration-directory
[6]: /agent/guide/agent-commands/?tab=agentv6v7#restart-the-agent
[7]: https://app.datadoghq.com/ci/pipelines
[8]: https://app.datadoghq.com/ci/pipeline-executions
| 38.31528 | 313 | 0.724973 | eng_Latn | 0.884175 |
f48ec1a06eca7d6e05e860caad8d7830f62f9d63 | 1,799 | md | Markdown | leetcode/1313. Decompress Run-Length Encoded List/README.md | joycse06/LeetCode-1 | ad105bd8c5de4a659c2bbe6b19f400b926c82d93 | [
"Fair"
] | 1 | 2021-02-11T01:23:10.000Z | 2021-02-11T01:23:10.000Z | leetcode/1313. Decompress Run-Length Encoded List/README.md | aerlokesh494/LeetCode | 0f2cbb28d5a9825b51a8d3b3a0ae0c30d7ff155f | [
"Fair"
] | null | null | null | leetcode/1313. Decompress Run-Length Encoded List/README.md | aerlokesh494/LeetCode | 0f2cbb28d5a9825b51a8d3b3a0ae0c30d7ff155f | [
"Fair"
] | 1 | 2021-03-25T17:11:14.000Z | 2021-03-25T17:11:14.000Z | # [1313. Decompress Run-Length Encoded List (Easy)](https://leetcode.com/problems/decompress-run-length-encoded-list/)
<p>We are given a list <code>nums</code> of integers representing a list compressed with run-length encoding.</p>
<p>Consider each adjacent pair of elements <code>[a, b] = [nums[2*i], nums[2*i+1]]</code> (with <code>i >= 0</code>). For each such pair, there are <code>a</code> elements with value <code>b</code> in the decompressed list.</p>
<p>Return the decompressed list.</p>
<p> </p>
<p><strong>Example 1:</strong></p>
<pre><strong>Input:</strong> nums = [1,2,3,4]
<strong>Output:</strong> [2,4,4,4]
<strong>Explanation:</strong> The first pair [1,2] means we have freq = 1 and val = 2 so we generate the array [2].
The second pair [3,4] means we have freq = 3 and val = 4 so we generate [4,4,4].
At the end the concatenation [2] + [4,4,4,4] is [2,4,4,4].
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li><code>2 <= nums.length <= 100</code></li>
<li><code>nums.length % 2 == 0</code></li>
<li><code><font face="monospace">1 <= nums[i] <= 100</font></code></li>
</ul>
**Related Topics**:
[Array](https://leetcode.com/tag/array/)
**Similar Questions**:
* [String Compression (Easy)](https://leetcode.com/problems/string-compression/)
## Solution 1.
```cpp
// OJ: https://leetcode.com/problems/decompress-run-length-encoded-list/
// Author: github.com/lzl124631x
// Time: O(N) where N is the length of the output array.
// Space: O(1)
class Solution {
public:
vector<int> decompressRLElist(vector<int>& nums) {
vector<int> ans;
for (int i = 0; i < nums.size(); i += 2) {
for (int j = 0; j < nums[i]; ++j) ans.push_back(nums[i + 1]);
}
return ans;
}
};
``` | 34.596154 | 246 | 0.637576 | eng_Latn | 0.657084 |
f48f4871296d11b9e1108088560784f25a80ee85 | 44 | md | Markdown | api/README.md | saisyam/rdump | 23676a4c6f3586ac67256efa22819b1932afc7aa | [
"Apache-2.0"
] | null | null | null | api/README.md | saisyam/rdump | 23676a4c6f3586ac67256efa22819b1932afc7aa | [
"Apache-2.0"
] | null | null | null | api/README.md | saisyam/rdump | 23676a4c6f3586ac67256efa22819b1932afc7aa | [
"Apache-2.0"
] | 1 | 2021-02-25T10:52:00.000Z | 2021-02-25T10:52:00.000Z | # API
API for R-Dump project built on Python | 22 | 38 | 0.772727 | eng_Latn | 0.992274 |
f48f8bc47d823cb3e49c903ee105781048486630 | 4,172 | md | Markdown | docs/framework/unmanaged-api/profiling/functionidmapper2-function.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/functionidmapper2-function.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/functionidmapper2-function.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "FunctionIDMapper2 Function"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "reference"
api_name:
- "FunctionIDMapper2"
api_location:
- "mscorwks.dll"
api_type:
- "COM"
f1_keywords:
- "FunctionIDMapper2"
helpviewer_keywords:
- "FunctionIDMapper2 function [.NET Framework profiling]"
ms.assetid: 466ad51b-8f0c-41d9-81f7-371aac3374cb
topic_type:
- "apiref"
caps.latest.revision: 10
author: "mairaw"
ms.author: "mairaw"
manager: "wpickett"
ms.workload:
- "dotnet"
---
# FunctionIDMapper2 Function
Notifies the profiler that the given identifier of a function may be remapped to an alternative ID to be used in the [FunctionEnter3](../../../../docs/framework/unmanaged-api/profiling/functionenter3-function.md), [FunctionLeave3](../../../../docs/framework/unmanaged-api/profiling/functionleave3-function.md), and [FunctionTailcall3](../../../../docs/framework/unmanaged-api/profiling/functiontailcall3-function.md), or[FunctionEnter3WithInfo](../../../../docs/framework/unmanaged-api/profiling/functionenter3withinfo-function.md), [FunctionLeave3WithInfo](../../../../docs/framework/unmanaged-api/profiling/functionleave3withinfo-function.md), and [FunctionTailcall3WithInfo](../../../../docs/framework/unmanaged-api/profiling/functiontailcall3withinfo-function.md) callbacks for that function. `FunctionIDMapper2` also enables the profiler to indicate whether it wants to receive callbacks for that function.
## Syntax
```
UINT_PTR __stdcall FunctionIDMapper2 (
[in] FunctionID funcId,
[in] void * clientData,
[out] BOOL *pbHookFunction
);
```
#### Parameters
`funcId`
[in] The function identifier to be remapped.
`clientData`
[in] A pointer to data that is used to disambiguate among runtimes.
`pbHookFunction`
[out] A pointer to a value that the profiler sets to `true` if it wants to receive `FunctionEnter3`, `FunctionLeave3`, and `FunctionTailcall3`, or `FunctionEnter3WithInfo`, `FunctionLeave3WithInfo`, and `FunctionTailcall3WithInfo` callbacks; otherwise, it sets this value to `false`.
## Return Value
The profiler returns a value that the execution engine uses as an alternative function identifier. The return value cannot be null unless `false` is returned in `pbHookFunction`. Otherwise, a null return value produces unpredictable results, including possibly halting the process.
## Remarks
This method extends the [FunctionIDMapper](../../../../docs/framework/unmanaged-api/profiling/functionidmapper-function.md) function with an additional parameter that is used to pass client data. The client data is used to disambiguate among runtimes.
## Requirements
**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** CorProf.idl
**Library:** CorGuids.lib
**.NET Framework Versions:** [!INCLUDE[net_current_v40plus](../../../../includes/net-current-v40plus-md.md)]
## See Also
[ICorProfilerInfo::SetFunctionIDMapper](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-setfunctionidmapper-method.md)
[ICorProfilerInfo3::SetFunctionIDMapper2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo3-setfunctionidmapper2-method.md)
[FunctionEnter3](../../../../docs/framework/unmanaged-api/profiling/functionenter3-function.md)
[FunctionLeave3](../../../../docs/framework/unmanaged-api/profiling/functionleave3-function.md)
[FunctionTailcall3](../../../../docs/framework/unmanaged-api/profiling/functiontailcall3-function.md)
[FunctionEnter3WithInfo](../../../../docs/framework/unmanaged-api/profiling/functionenter3withinfo-function.md)
[FunctionLeave3WithInfo](../../../../docs/framework/unmanaged-api/profiling/functionleave3withinfo-function.md)
[FunctionTailcall3WithInfo](../../../../docs/framework/unmanaged-api/profiling/functiontailcall3withinfo-function.md)
[Profiling Global Static Functions](../../../../docs/framework/unmanaged-api/profiling/profiling-global-static-functions.md)
| 52.15 | 913 | 0.737776 | eng_Latn | 0.344092 |
f4901f737d740eac8fdc417cfd2bf71c574bae30 | 1,724 | md | Markdown | wdk-ddi-src/content/winsplp/ns-winsplp-branchofficelogofflinefilefull.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/winsplp/ns-winsplp-branchofficelogofflinefilefull.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/winsplp/ns-winsplp-branchofficelogofflinefilefull.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:winsplp.__unnamed_struct_4
title: BranchOfficeLogOfflineFileFull (winsplp.h)
description: Contains the necessary data for logging that the offline log archive on the current client overflowed at some point.
old-location: print\branchofficelogofflinefilefull.htm
tech.root: print
ms.assetid: 41190CE8-8779-477C-BFB0-6410DF096EFD
ms.date: 04/20/2018
keywords: ["BranchOfficeLogOfflineFileFull structure"]
ms.keywords: "*PBranchOfficeLogOfflineFileFull, BranchOfficeLogOfflineFileFull, BranchOfficeLogOfflineFileFull structure [Print Devices], PBranchOfficeLogOfflineFileFull, PBranchOfficeLogOfflineFileFull structure pointer [Print Devices], print.branchofficelogofflinefilefull, winsplp/BranchOfficeLogOfflineFileFull, winsplp/PBranchOfficeLogOfflineFileFull"
f1_keywords:
- "winsplp/BranchOfficeLogOfflineFileFull"
- "BranchOfficeLogOfflineFileFull"
req.header: winsplp.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- Winsplp.h
api_name:
- BranchOfficeLogOfflineFileFull
targetos: Windows
req.typenames: BranchOfficeLogOfflineFileFull, *PBranchOfficeLogOfflineFileFull
---
# BranchOfficeLogOfflineFileFull structure
## -description
Contains the necessary data for logging that the offline log archive on the current client overflowed at some point.
## -struct-fields
### -field pMachineName
Specifies the name of the print client.
| 27.806452 | 357 | 0.789443 | eng_Latn | 0.421613 |
f490586d503d6ca4bf44a8e08f3f5d8d098bede0 | 120 | md | Markdown | pages/participate_en.md | opengeneva/opengeneva2018 | ed1bef824407c71749b63d8590c939782d2b0a7b | [
"CC-BY-3.0"
] | null | null | null | pages/participate_en.md | opengeneva/opengeneva2018 | ed1bef824407c71749b63d8590c939782d2b0a7b | [
"CC-BY-3.0"
] | 2 | 2018-03-27T22:31:28.000Z | 2018-03-28T09:11:23.000Z | pages/participate_en.md | opengeneva/opengeneva_website | ed1bef824407c71749b63d8590c939782d2b0a7b | [
"CC-BY-3.0"
] | 2 | 2017-11-16T14:12:48.000Z | 2017-12-15T18:23:06.000Z | ---
layout: default
title: participate
lang: en
ref: participate
permalink : /participate/
---
blah | Participate
| 9.230769 | 25 | 0.7 | eng_Latn | 0.62683 |
f49077924a5427184def5d24ec12cfcdaada622f | 7,734 | md | Markdown | azurermps-4.4.1/AzureRM.DataLakeAnalytics/Get-AzureRmDataLakeAnalyticsJob.md | isra-fel/azure-docs-powershell | c7d9fc26f7b702f64e66f4ed7fc5f1051ab8a4b8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azurermps-4.4.1/AzureRM.DataLakeAnalytics/Get-AzureRmDataLakeAnalyticsJob.md | isra-fel/azure-docs-powershell | c7d9fc26f7b702f64e66f4ed7fc5f1051ab8a4b8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azurermps-4.4.1/AzureRM.DataLakeAnalytics/Get-AzureRmDataLakeAnalyticsJob.md | isra-fel/azure-docs-powershell | c7d9fc26f7b702f64e66f4ed7fc5f1051ab8a4b8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
external help file: Microsoft.Azure.Commands.DataLakeAnalytics.dll-Help.xml
Module Name: AzureRM.DataLakeAnalytics
ms.assetid: A0293D80-5935-4D2C-AF11-2837FEC95760
online version:
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/preview/src/ResourceManager/DataLakeAnalytics/Commands.DataLakeAnalytics/help/Get-AzureRmDataLakeAnalyticsJob.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/preview/src/ResourceManager/DataLakeAnalytics/Commands.DataLakeAnalytics/help/Get-AzureRmDataLakeAnalyticsJob.md
---
# Get-AzureRmDataLakeAnalyticsJob
## SYNOPSIS
Gets a Data Lake Analytics job.
## SYNTAX
### All In Resource Group and Account (Default)
```
Get-AzureRmDataLakeAnalyticsJob [-Account] <String> [[-Name] <String>] [[-Submitter] <String>]
[[-SubmittedAfter] <DateTimeOffset>] [[-SubmittedBefore] <DateTimeOffset>] [[-State] <JobState[]>]
[[-Result] <JobResult[]>] [-Top <Int32>] [-PipelineId <Guid>] [-RecurrenceId <Guid>]
[-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```
### Specific JobInformation
```
Get-AzureRmDataLakeAnalyticsJob [-Account] <String> [-JobId] <Guid> [[-Include] <ExtendedJobData>]
[-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```
## DESCRIPTION
The **Get-AzureRmDataLakeAnalyticsJob** cmdlet gets an Azure Data Lake Analytics job.
If you do not specify a job, this cmdlet gets all jobs.
## EXAMPLES
### Example 1: Get a specified job
```
PS C:\>Get-AzureRmDataLakeAnalyticsJob -Account "contosoadla" -JobId $JobID01
```
This command gets the job with the specified ID.
### Example 2: Get jobs submitted in the past week
```
PS C:\>Get-AzureRmDataLakeAnalyticsJob -Account "contosoadla" -SubmittedAfter (Get-Date).AddDays(-7)
```
This command gets jobs submitted in the past week.
## PARAMETERS
### -Account
Specifies the name of a Data Lake Analytics account.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases: AccountName
Required: True
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Include
Specifies options that indicate the type of additional information to retrieve about the job.
The acceptable values for this parameter are:
- None
- DebugInfo
- Statistics
- All
```yaml
Type: Microsoft.Azure.Commands.DataLakeAnalytics.Models.DataLakeAnalyticsEnums+ExtendedJobData
Parameter Sets: Specific JobInformation
Aliases:
Accepted values: None, All, DebugInfo, Statistics
Required: False
Position: 2
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -JobId
Specifies the ID of the job to get.
```yaml
Type: System.Guid
Parameter Sets: Specific JobInformation
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
### -Name
Specifies a name to use to filter the job list results.
```yaml
Type: System.String
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: 1
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -PipelineId
An optional ID that indicates only jobs part of the specified pipeline should be returned.
```yaml
Type: System.Nullable`1[System.Guid]
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -RecurrenceId
An optional ID that indicates only jobs part of the specified recurrence should be returned.
```yaml
Type: System.Nullable`1[System.Guid]
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Result
Specifies a result filter for the job results.
The acceptable values for this parameter are:
- None
- Cancelled
- Failed
- Succeeded
```yaml
Type: Microsoft.Azure.Management.DataLake.Analytics.Models.JobResult[]
Parameter Sets: All In Resource Group and Account
Aliases:
Accepted values: None, Succeeded, Cancelled, Failed
Required: False
Position: 6
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -State
Specifies a state filter for the job results.
The acceptable values for this parameter are:
- Accepted
- New
- Compiling
- Scheduling
- Queued
- Starting
- Paused
- Running
- Ended
```yaml
Type: Microsoft.Azure.Management.DataLake.Analytics.Models.JobState[]
Parameter Sets: All In Resource Group and Account
Aliases:
Accepted values: Accepted, Compiling, Ended, New, Queued, Running, Scheduling, Starting, Paused, WaitingForCapacity
Required: False
Position: 5
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -SubmittedAfter
Specifies a date filter.
Use this parameter to filter the job list result to jobs submitted after the specified date.
```yaml
Type: System.Nullable`1[System.DateTimeOffset]
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: 3
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -SubmittedBefore
Specifies a date filter.
Use this parameter to filter the job list result to jobs submitted before the specified date.
```yaml
Type: System.Nullable`1[System.DateTimeOffset]
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: 4
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Submitter
Specifies the email address of a user.
Use this parameter to filter the job list results to jobs submitted by a specified user.
```yaml
Type: System.String
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: 2
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Top
An optional value that indicates the number of jobs to return. The default value is 500.
```yaml
Type: System.Nullable`1[System.Int32]
Parameter Sets: All In Resource Group and Account
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.
```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzureRmContext, AzureCredential
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### Guid
The 'JobId' parameter accepts a value of type 'Guid' from the pipeline.
## OUTPUTS
### JobInformation
The specified job information details
### List<JobInformation>
The list of jobs in the specified Data Lake Analytics account.
## NOTES
## RELATED LINKS
[Stop-AzureRmDataLakeAnalyticsJob](./Stop-AzureRmDataLakeAnalyticsJob.md)
[Submit-AzureRmDataLakeAnalyticsJob](./Submit-AzureRmDataLakeAnalyticsJob.md)
[Wait-AzureRmDataLakeAnalyticsJob](./Wait-AzureRmDataLakeAnalyticsJob.md)
| 24.868167 | 315 | 0.78045 | eng_Latn | 0.407269 |
f491340aff309ace8b9b400a860cbea4719d7089 | 2,397 | md | Markdown | _posts/2021-03-19-degrees-of-freedom.md | vzahorui/vzahorui.github.io | 1f609d07eda64fb3e1fb5c2afbee775e40067aba | [
"MIT"
] | null | null | null | _posts/2021-03-19-degrees-of-freedom.md | vzahorui/vzahorui.github.io | 1f609d07eda64fb3e1fb5c2afbee775e40067aba | [
"MIT"
] | null | null | null | _posts/2021-03-19-degrees-of-freedom.md | vzahorui/vzahorui.github.io | 1f609d07eda64fb3e1fb5c2afbee775e40067aba | [
"MIT"
] | null | null | null | ---
layout: single
title: "Degrees of freedom in statistics"
description: explaining Degrees of freedom
category: "Probability"
tags: residuals variance mean linear-model
date: 2021-03-21
---
In statistics, the degrees of freedom is the number of independent elements forming a final statistic which are free to vary without violating an imposed constraint.
Suppose we have a sample of $n$ observations, and we know the mean of the sample. In this case $(n-1)$ elements of the sample may be any real numbers, but the last remaining element is strictly determined by the other elements and the mean (the imposed constraint). Hence the number of degrees of freedom for the set of observations is $(n-1)$. The mean itself has $n$ degrees of freedom because all $n$ elements used to form this statistic can be anything.
When estimating the variance of a population using sample data, in place of the mean of the population we use an estimate of it - the mean of the sample. The sum of the residuals $(x_i - \bar x)$ is always equal to zero. Knowing the values of $(n-1)$ residuals, it is possible to calculate the remaining one, thus there are $(n-1)$ degrees of freedom for the residuals and, as a consequence, for the variance as well. Another way to think about it is to imagine a sample of size 1. In this case there will be no residuals, as the only observed value will be equal to the mean of this sample. However, if we add just one additional observation then we are giving freedom to the residuals, because the values may now deviate from their mean.
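As a quick numeric illustration (a minimal sketch with a made-up three-element sample), the residuals around the sample mean always sum to zero, which is exactly the constraint that costs one degree of freedom; NumPy's `ddof=1` divides by $(n-1)$ accordingly:
```python
import numpy as np

x = np.array([2.0, 4.0, 9.0])        # made-up sample, n = 3
residuals = x - x.mean()             # deviations from the sample mean

print(residuals.sum())               # ~0: knowing n-1 residuals fixes the last one
print(x.var(ddof=1))                 # unbiased variance: divides by n - 1 = 2
print((residuals ** 2).sum() / (len(x) - 1))  # same value computed by hand
```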
## In linear models
In linear regression the number of degrees of freedom for the error term is calculated as $(n-p-1)$, where $p$ is the number of independent variables - the dimensions. Here is one way to build intuition about it. If we have $n$ observations and $n$ dimensions then a hyperplane can be perfectly fit to the points representing these observations. In other words, we have a system of linear equations which has $n$ equations and $(n+1)$ parameters - coefficients at each independent variable plus the intercept - and which can be satisfied exactly. This system has no error term, so in order to introduce freedom for the variance of the error, at least one additional observation should be added. Hence the minimal number of observations allowing this freedom is $(p+2)$: $p$ of them are reserved for the coefficients at the variables, 1 - for the intercept, and 1 more gives the error room to vary. | 133.166667 | 834 | 0.780559 | eng_Latn | 0.999923 |
f4918b1c4f820cc033777b7cceb9c317144d9469 | 984 | md | Markdown | _sds/basics/infrastructure/onpremise/dockerCompose/ec2.md | r-e-x-a-g-o-n/scalable-data-science | a97451a768cf12eec9a20fbe5552bbcaf215d662 | [
"Unlicense"
] | 138 | 2017-07-25T06:48:28.000Z | 2022-03-31T12:23:36.000Z | _sds/basics/infrastructure/onpremise/dockerCompose/ec2.md | r-e-x-a-g-o-n/scalable-data-science | a97451a768cf12eec9a20fbe5552bbcaf215d662 | [
"Unlicense"
] | 11 | 2017-08-17T13:45:54.000Z | 2021-06-04T09:06:53.000Z | _sds/basics/infrastructure/onpremise/dockerCompose/ec2.md | r-e-x-a-g-o-n/scalable-data-science | a97451a768cf12eec9a20fbe5552bbcaf215d662 | [
"Unlicense"
] | 74 | 2017-08-18T17:04:46.000Z | 2022-03-21T14:30:51.000Z | # hsCompose on AWS EC2 #
## Single instance ##
- Start any Ubuntu 18.04 EC2 instance with at least 8GB of RAM (16 recommended)
and enough disk space for your data sets. Remember to publish all the required
ports:
* 8080
* 8088
* 8888
* 50070
* 8042
* 4040
* 4041
* 9092
* 2181 (maybe)
- ssh to your instance using the username `ubuntu`
- Install `docker` and `pip`: `sudo apt install -y docker.io python3-pip`
- Add the `ubuntu` user to the `docker` group: `sudo usermod -aG docker ubuntu`
- Restart the instance to update `docker` privileges for the `ubuntu` user and
ssh into it again. (This can be skipped but then all docker commands need to
be run with sudo)
- Install `docker-compose`: `sudo pip3 install docker-compose`
- Clone the `hsCompose` repo: `git clone https://gitlab.com/dlilja/hscompose.git`
- `cd` into the `hscompose` folder.
- Pull all the images: `docker-compose pull`
- Start services: `docker-compose up -d`
## Scalable cluster ##
| 32.8 | 81 | 0.706301 | eng_Latn | 0.942464 |
f491ade86ff1e2725888588b50a50ed3956fe6c8 | 4,324 | md | Markdown | Backup/JWTAuthenticationQuickStart.md | amandahuarng/adobeio-auth | 920bc6232a6443d5cb6e90ea8f0993caa2bf29e7 | [
"MIT"
] | 32 | 2019-02-16T04:18:00.000Z | 2021-11-24T15:14:35.000Z | Backup/JWTAuthenticationQuickStart.md | amandahuarng/adobeio-auth | 920bc6232a6443d5cb6e90ea8f0993caa2bf29e7 | [
"MIT"
] | 47 | 2019-02-05T22:22:22.000Z | 2021-11-02T20:22:13.000Z | Backup/JWTAuthenticationQuickStart.md | isabella232/adobeio-auth | 70ffbcac0dccf0de07ba955c3a4cd3b3356e83bd | [
"MIT"
] | 85 | 2018-12-18T22:38:07.000Z | 2022-01-11T09:43:54.000Z | # JWT Authentication Quick Start
API services tied to entitled Adobe products (such as Campaign, Target, Launch, and others) require a JSON Web Token (JWT) to retrieve access tokens for usage against authenticated endpoints. This document serves as a quickstart guide for first-time users.
## Things to check before you start:
You must be an organizational admin for your enterprise organization with the appropriate Adobe products entitled. (Check at https://adminconsole.adobe.com/{your\_org\_id}@AdobeOrg/overview)
## Steps to get a JWT and an access token
Regardless of your platform, you begin with the same steps in Adobe Developer Console:
1. Create a new integration in Adobe Developer Console: [https://www.adobe.com/go/devs_console_ui/integrations](https://www.adobe.com/go/devs_console_ui/integrations)
![Create integration](Images/auth_jwtqs_01.png "Create an integration")
2. Choose to access an API.
3. Subscribe to an entitled product (for instance, Launch, by Adobe).
![Subscribe service](Images/auth_jwtqs_02.png "Subscribe to a product or service")
4. Confirm that you want to create a new integration.
5. Create a private key and a public certificate. Make sure you store these securely.
Once you complete the above steps, your path diverges depending on your platform:
_**MacOS and Linux:**_
Open a terminal and execute the following command:
`openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out certificate_pub.crt`
![Generate public certificate](Images/auth_jwtqs_00.png "Generate Public certificate")
_**Windows:**_
1. Download an OpenSSL client to generate public certificates; for example, you can try the [OpenSSL Windows client](https://bintray.com/vszakats/generic/download_file?file_path=openssl-1.1.1-win64-mingw.zip).
2. Extract the folder and copy it to the C:/libs/ location.
3. Open a command line window and execute the following commands:
`set OPENSSL_CONF=C:/libs/openssl-1.1.1-win64-mingw/openssl.cnf`
`cd C:/libs/openssl-1.1.1-win64-mingw/`
`openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out certificate_pub.crt`
![Generate public certificate windows](Images/auth_jwtqs_000.png "Generate Public certificate windows")
Once you’ve complete the steps for your chosen platform, continue in the Adobe Developer Console:
6. Upload the public certificate (certificate_pub.crt) as a part of creating the integration.
![Upload public certificate](Images/auth_jwtqs_03.png "Upload public certificate")
7. Your integration should now be created with the appropriate public certificate and claims.
![Integration created](Images/auth_jwtqs_04.png "Integration created")
8. Go to the JWT tab and paste in your private key to generate a JWT.
![JWT tab](Images/auth_jwtqs_05.png "JWT tab")
9. Copy the “Sample CURL Command” to get your first access token.
![Get access token](Images/auth_jwtqs_06.png "Get access token")
10. Open Postman, then click Import > Paste Raw Text and paste the copied curl command.
![Postman import](Images/auth_jwtqs_07.png "Postman import")
11. Click Send.
![Postman send](Images/auth_jwtqs_08.png "Postman send")
The example curl sends a POST request to [https://ims-na1.adobelogin.com/ims/exchange/jwt](https://ims-na1.adobelogin.com/ims/exchange/jwt) with the following parameters.
| Parameter | Description|
|---|---|
| `client_id` | The API key generated for your integration. Find this on the I/O Console for your integration. |
| `client_secret` | The client secret generated for your integration. Find this on the I/O Console for your integration. |
| `jwt_token` | A base-64 encoded JSON Web Token that encapsulates the identity of your integration, signed with a private key that corresponds to a public key certificate attached to the integration. Generate this on the I/O Console in the JWT Tab for your integration. Note that this token has the expiration time parameter `exp` set to 24 hours from when the token is generated. |
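If you prefer to script the exchange instead of using Postman, a minimal sketch in Python follows. The three values are placeholders taken from your own integration (the JWT itself is the one generated on the console's JWT tab), and the response is assumed to carry the token in an `access_token` field:
```python
import requests  # third-party: pip install requests

# Placeholders - copy the real values from your integration in the console.
payload = {
    "client_id": "<your_api_key>",
    "client_secret": "<your_client_secret>",
    "jwt_token": "<your_base64_encoded_jwt>",
}

response = requests.post("https://ims-na1.adobelogin.com/ims/exchange/jwt", data=payload)
response.raise_for_status()

# The exchange response is assumed to include the bearer token as "access_token".
access_token = response.json()["access_token"]
print(access_token)
```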
You now have your access token. Look up documentation for the specific API service for which you’re hitting authenticated endpoints to find what other parameters are expected. Most of them need an `x-api-key`, which will be the same as your `client_id`.
| 53.382716 | 384 | 0.771508 | eng_Latn | 0.954145 |
f491f5a0d12851cb9a72332083f4b412b4be23ff | 3,015 | md | Markdown | docs/ExternalConnectivity.md | tomdee/calico-containers | 09abe8353456acc2cec8fe55b7cfe07a4d8cad2f | [
"Apache-2.0"
] | null | null | null | docs/ExternalConnectivity.md | tomdee/calico-containers | 09abe8353456acc2cec8fe55b7cfe07a4d8cad2f | [
"Apache-2.0"
] | 3 | 2015-07-30T19:18:58.000Z | 2015-07-30T23:32:19.000Z | docs/ExternalConnectivity.md | tomdee/calico-docker | 09abe8353456acc2cec8fe55b7cfe07a4d8cad2f | [
"Apache-2.0"
] | null | null | null | <!--- master only -->
> ![warning](images/warning.png) This document applies to the HEAD of the calico-containers source tree.
>
> View the calico-containers documentation for the latest release [here](https://github.com/projectcalico/calico-containers/blob/v0.14.0/README.md).
<!--- else
> You are viewing the calico-containers documentation for release **release**.
<!--- end of master only -->
# External Connectivity - Hosts on their own Layer 2 segment
Calico creates a routed network on which your containers look like normal IP
speakers. You can connect to them from a host in your cluster (assuming the
network policy you've assigned allows this) using their IP address.
In order to access your containers from locations other than your cluster of
hosts, you will need to configure IP routing.
A common scenario is for your container hosts to be on their own isolated layer
2 network, like a rack in your server room or an entire data center. Access to
that network is via a router, which also is the default router for all the
container hosts.
![hosts-on-layer-2-network](images/hosts-on-layer-2-network.png)
If this describes your infrastructure, you'll need to configure that router to
communicate with the Calico-enabled hosts over BGP. If you have a small number
of hosts, you can configure BGP sessions between your router and each
Calico-enabled host.
The Calico network defaults to AS number 64511, which is in the private range,
and therefore will not conflict with anything else on the public internet.
However, if your organization is already using AS number 64511, you should
change the Calico cluster to use a different private AS number. See the
[BGP Configuration tutorial](bgp.md) for how to do this.
Then, on one of your Calico-enabled hosts, configure the session to your
router. Let's say your router's IP address is 192.20.30.40 and it is in AS
number 64567:
$ calicoctl bgp peer add 192.20.30.40 as 64567
You only need to do this on one Calico-enabled host; you have configured a
global BGP peer and every host in your cluster will attempt to peer with it
(see the [BGP Configuration tutorial](bgp.md) for more detail).
Lastly, you'll need to configure your router. Consult your router's
configuration guide for the exact steps, but generally speaking, you'll need to:
1. enable BGP if it hasn't already been enabled
2. configure each Calico-enabled host as a BGP peer
3. announce the range of IP addresses used by your containers to the rest of your network
If you have a L3 routed fabric or some other scenario not covered by the above,
detailed datacenter networking recommendations are given in the main
[Project Calico documentation](http://docs.projectcalico.org/en/latest/index.html).
We'd also encourage you to [get in touch](http://www.projectcalico.org/contact/)
to discuss your environment.
[![Analytics](https://ga-beacon.appspot.com/UA-52125893-3/calico-containers/docs/ExternalConnectivity.md?pixel)](https://github.com/igrigorik/ga-beacon)
| 51.101695 | 152 | 0.779104 | eng_Latn | 0.998796 |
f4920ef8eddb81bd03598e427fc86d187c51f8ef | 671 | md | Markdown | examples/v1beta1/mnist_with_summaries/README.md | zionwu/tf-operator | 07b3f745df3c4c4d7e3efd35ccb3f9a65c13f9df | [
"Apache-2.0"
] | null | null | null | examples/v1beta1/mnist_with_summaries/README.md | zionwu/tf-operator | 07b3f745df3c4c4d7e3efd35ccb3f9a65c13f9df | [
"Apache-2.0"
] | 2 | 2022-02-26T03:26:18.000Z | 2022-03-08T23:35:16.000Z | examples/v1beta1/mnist_with_summaries/README.md | zionwu/tf-operator | 07b3f745df3c4c4d7e3efd35ccb3f9a65c13f9df | [
"Apache-2.0"
] | null | null | null | ### Simple mnist example with persistent volume
This is a simple example using an MNIST model that outputs a TF summary.
The example also mounts a persistent volume for output, making it suitable
for integrating with other components like Katib.
The source code is borrowed from TensorFlow tutorials [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py).
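For context, the core of what that script does is write TF summaries to a log directory, which in this example should live on the mounted volume. A minimal TF 1.x sketch of that idea (the log path and values below are illustrative, not taken from the tutorial):
```python
import tensorflow as tf  # TensorFlow 1.x API, matching the tutorial script

log_dir = "/tmp/tensorflow/mnist/logs"   # illustrative path; point it at the mounted volume

loss = tf.placeholder(tf.float32, name="loss")
tf.summary.scalar("loss", loss)          # scalar summary that TensorBoard can plot
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter(log_dir, sess.graph)
    for step in range(3):
        summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(summary, step)
    writer.close()
```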
To build this image:
```shell
docker build -f Dockerfile -t kubeflow/tf-mnist-with-summaries:1.0 ./
```
Usage:
1. Add the persistent volume and claim: `kubectl apply -f tfevent-volume/.`
1. Deploy the TFJob: `kubectl apply -f tf_job_mnist.yaml`
| 39.470588 | 175 | 0.786885 | eng_Latn | 0.979733 |
f4925b6da71a1fa45b41e295616451f6afc46889 | 7,513 | md | Markdown | articles/azure-monitor/alerts/alerts-common-schema.md | Artaggedon/azure-docs.es-es | 73e6ff211a5d55a2b8293a4dc137c48a63ed1369 | [
"CC-BY-4.0",
"MIT"
] | 66 | 2017-07-09T03:34:12.000Z | 2022-03-05T21:27:20.000Z | articles/azure-monitor/alerts/alerts-common-schema.md | Artaggedon/azure-docs.es-es | 73e6ff211a5d55a2b8293a4dc137c48a63ed1369 | [
"CC-BY-4.0",
"MIT"
] | 671 | 2017-06-29T16:36:35.000Z | 2021-12-03T16:34:03.000Z | articles/azure-monitor/alerts/alerts-common-schema.md | Artaggedon/azure-docs.es-es | 73e6ff211a5d55a2b8293a4dc137c48a63ed1369 | [
"CC-BY-4.0",
"MIT"
] | 171 | 2017-07-25T06:26:46.000Z | 2022-03-23T09:07:10.000Z | ---
title: Esquema de alertas comunes para las alertas de Azure Monitor
description: Descripción del esquema de alertas comunes, por qué debería usarlo y cómo habilitarlo
ms.topic: conceptual
ms.date: 03/14/2019
ms.openlocfilehash: ea05c010ff9ee732302054a07c8157e02e3e0034
ms.sourcegitcommit: 02d443532c4d2e9e449025908a05fb9c84eba039
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 05/06/2021
ms.locfileid: "108739806"
---
# <a name="common-alert-schema"></a>Common alert schema
This article describes what the common alert schema is, the benefits of using it, and how to enable it.
## <a name="what-is-the-common-alert-schema"></a>What is the common alert schema?
The common alert schema standardizes the consumption experience for alert notifications in Azure. Historically, the three alert types in Azure (metric, log, and activity log) have had their own email templates, webhook schemas, and so on. With the common alert schema, you can now receive alert notifications with a consistent schema.
Any alert instance describes **the resource that was affected** and **the cause of the alert**, and these instances are described in the common schema in the following sections:
* **Essentials**: A set of **standardized fields** common to all alert types, which describe **what resource** the alert is on along with other common alert metadata (for example, severity or description).
* **Alert context**: A set of fields that describe the **cause of the alert**, with fields that vary **by alert type**. For example, a metric alert would have fields like the metric name and the metric value in the alert context, whereas an activity log alert would have information about the event that generated the alert.
A typical integration scenario we hear about from customers is routing the alert instance to the appropriate team based on some dynamic variable (for example, the resource group), after which the responsible team starts working on it. With the common alert schema, you can have standardized routing logic across alert types by using the essential fields, leaving the context fields as-is for the affected teams to investigate further.
This means that you can potentially have fewer integrations, making the process of managing and maintaining them a _much_ simpler task. Additionally, future alert payload enrichments (for example, customization, diagnostic enrichment, and so on) will only surface in the common schema.
## <a name="what-enhancements-does-the-common-alert-schema-bring"></a>What enhancements does the common alert schema bring?
The common alert schema shows up primarily in your alert notifications. The enhancements you will see are:
| Action | Enhancements|
|:---|:---|
| Email | A detailed and consistent email template that lets you easily diagnose issues at a glance. Embedded deep links to the alert instance on the portal and to the affected resource ensure that you can quickly jump into the remediation process. |
| Webhook/Logic App/Azure Functions/Automation runbook | A consistent JSON structure for all alert types, which lets you easily build integrations across the different alert types. |
The new schema will also enable a richer alert consumption experience across both the Azure portal and the Azure mobile app in the immediate future.
[Learn more about the schema definitions for webhooks/Logic Apps/Azure Functions/Automation runbooks.](./alerts-common-schema-definitions.md)
> [!NOTE]
> The following actions do not support the common alert schema: ITSM Connector.
## <a name="how-do-i-enable-the-common-alert-schema"></a>How do I enable the common alert schema?
You can opt in to or out of the common alert schema through action groups, in both the portal and the REST API. The toggle to switch to the new schema exists at the action level. For example, you have to separately opt in for an email action and a webhook action.
> [!NOTE]
> 1. The following alert types support the common schema by default (no opt-in required):
>    * Smart detection alerts
> 1. The following alert types currently do not support the common schema:
>    * Alerts generated by [VM insights](../vm/vminsights-overview.md)
>    * Alerts generated by [Azure Cost Management](../../cost-management-billing/manage/cost-management-budget-scenario.md)
### <a name="through-the-azure-portal"></a>Through the Azure portal
![Common alert schema opt-in](media/alerts-common-schema/portal-opt-in.png)
1. Open any existing action or a new action in an action group.
1. Select 'Yes' for the toggle to enable the common alert schema, as shown.
### <a name="through-the-action-groups-rest-api"></a>Through the Action Groups REST API
You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. While making the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call, you can set the flag "useCommonAlertSchema" to 'true' (to opt in) or 'false' (to opt out) for any of the following actions: email/webhook/Logic App/Azure Function/Automation runbook.
For example, the following request body made to the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API will do the following:
* Enable the common alert schema for the email action "John Doe's email"
* Disable the common alert schema for the email action "Jane Smith's email"
* Enable the common alert schema for the webhook action "Sample webhook"
```json
{
"properties": {
"groupShortName": "sample",
"enabled": true,
"emailReceivers": [
{
"name": "John Doe's email",
"emailAddress": "[email protected]",
"useCommonAlertSchema": true
},
{
"name": "Jane Smith's email",
"emailAddress": "[email protected]",
"useCommonAlertSchema": false
}
],
"smsReceivers": [
{
"name": "John Doe's mobile",
"countryCode": "1",
"phoneNumber": "1234567890"
},
{
"name": "Jane Smith's mobile",
"countryCode": "1",
"phoneNumber": "0987654321"
}
],
"webhookReceivers": [
{
"name": "Sample webhook",
"serviceUri": "http://www.example.com/webhook",
"useCommonAlertSchema": true
}
]
},
"location": "Global",
"tags": {}
}
```
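As a rough sketch of how that body could be submitted programmatically (everything below is illustrative: the subscription, resource group, action group name, bearer token, and local file name are placeholders, and the API version is an assumption to verify against the Action Groups REST reference):
```python
import json
import requests  # third-party: pip install requests

# Placeholders - substitute values from your own environment.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
action_group_name = "sampleActionGroup"
bearer_token = "<azure-ad-access-token>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/microsoft.insights/actionGroups/{action_group_name}"
    "?api-version=2019-06-01"  # assumed API version; check the REST reference
)

# Load the request body shown above from a local file (hypothetical file name).
with open("action_group_body.json") as f:
    body = json.load(f)

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {bearer_token}"})
response.raise_for_status()
print(response.status_code)
```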
## <a name="next-steps"></a>Next steps
- [Common alert schema definitions for webhooks/Logic Apps/Azure Functions/Automation runbooks.](./alerts-common-schema-definitions.md)
- [Learn how to create a logic app that uses the common alert schema to handle all your alerts.](./alerts-common-schema-integrations.md)
| 61.081301 | 526 | 0.758286 | spa_Latn | 0.987888 |
f4930570a6b7995831de7e49f70860414718f86a | 3,767 | md | Markdown | azps-5.0.0/Az.Network/New-AzLoadBalancerBackendAddressConfig.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-12-05T17:58:35.000Z | 2020-12-05T17:58:35.000Z | azps-5.0.0/Az.Network/New-AzLoadBalancerBackendAddressConfig.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azps-5.0.0/Az.Network/New-AzLoadBalancerBackendAddressConfig.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
external help file: Microsoft.Azure.PowerShell.Cmdlets.Network.dll-Help.xml
Module Name: Az.Network
online version: https://docs.microsoft.com/en-us/powershell/module/az.network/new-azloadbalancerbackendaddressconfig
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/New-AzLoadBalancerBackendAddressConfig.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/New-AzLoadBalancerBackendAddressConfig.md
ms.openlocfilehash: 7d373092f850dd25abe5b6d913ddcf2572d4be94
ms.sourcegitcommit: b4a38bcb0501a9016a4998efd377aa75d3ef9ce8
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/27/2020
ms.locfileid: "94324622"
---
# New-AzLoadBalancerBackendAddressConfig
## SYNOPSIS
Returns a load balancer backend address configuration.
## SYNTAX
```
New-AzLoadBalancerBackendAddressConfig -IpAddress <String> -Name <String> -VirtualNetworkId <String>
[-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]
```
## DESCRIPTION
Returns a load balancer backend address configuration.
## EXAMPLES
### Example 1
### Example 2: New load balancer address configuration with a virtual network reference
```powershell
PS C:\> $virtualNetwork = Get-AzVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup
New-AzLoadBalancerBackendAddressConfig -IpAddress "10.0.0.5" -Name "TestVNetRef" -VirtualNetworkId $virtualNetwork.Id
```
## PARAMETERS
### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.
```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzContext, AzureRmContext, AzureCredential
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -IpAddress
The IP address to add to the backend pool.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Name
The name of the backend address configuration.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -VirtualNetworkId
The virtual network associated with the backend address configuration.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Confirm
Prompts you for confirmation before running the cmdlet.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: cf
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: wi
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### System.String
### Microsoft.Azure.Commands.Network.Models.PSVirtualNetwork
## OUTPUTS
### Microsoft.Azure.Commands.Network.Models.PSLoadBalancerBackendAddress
## NOTES
## RELATED LINKS
| 25.62585 | 300 | 0.797982 | yue_Hant | 0.367517 |
f4930ee1e4f4e52d05a08e0d9777aba5cad6dc43 | 268 | md | Markdown | specification/workflow-yaml/logging.md | direktiv/direktiv | 74d3eedf437790d32390408b89ed5299a355ca16 | [
"Apache-2.0"
] | 40 | 2021-10-30T18:44:25.000Z | 2022-03-06T20:26:14.000Z | specification/workflow-yaml/logging.md | direktiv/direktiv | 74d3eedf437790d32390408b89ed5299a355ca16 | [
"Apache-2.0"
] | 76 | 2021-10-31T01:59:27.000Z | 2022-03-31T22:40:51.000Z | specification/workflow-yaml/logging.md | direktiv/direktiv | 74d3eedf437790d32390408b89ed5299a355ca16 | [
"Apache-2.0"
] | 8 | 2021-11-01T17:10:15.000Z | 2022-03-17T05:36:26.000Z | # Logging
All states can write to instance logs via a common field `log`. This field uses [structured jx](../instance-data/structured-jx.md) to support querying instance data and inserting it into the logs.
```yaml
- id: a
type: noop
log: 'Hello, world!'
```
| 24.363636 | 197 | 0.705224 | eng_Latn | 0.993775 |