Dataset columns:
content: string (lengths 0 to 557k)
url: string (lengths 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (lengths 9 to 15)
segment: string (lengths 13 to 17)
image_urls: string (lengths 2 to 55.5k)
netloc: string (lengths 7 to 77)
Other Deployment Options While gathering information about your environment before and during deployment, InsightIDR provides support for organizations that use the following: Azure Deployment If you use Microsoft Azure in your environment, see the following pages for instructions on how to connect InsightIDR to your Azure infrastructure and collect the following corresponding data sources: - LDAP user and account data - Active Directory authentication and admin activity - DHCP Hostname to IP mapping InsightIDR fully supports Windows assets running in a hybrid cloud, an on-premises domain, or a cloud-only domain model. However, InsightIDR only partially supports Linux deployments in these scenarios. Deploy in Multi-Domain Environments If you have more than one Active Directory in your Windows environment, specify which domain is your default domain in order to more accurately detect users across domains and resolve any issues with user accounts. For instance, if your company has DomainA and DomainB, but both domains have a user called John Smith, a default domain specifies which user the activity originated from. In this example, the default domain is DomainA. If InsightIDR receives data from John Smith that does not specify the domain, InsightIDR attributes data to John Smith from DomainA. If you do not configure a default domain, InsightIDR may incorrectly attribute user information. Applicable Event Sources You can configure default domains for the following event source categories: For each configured event source, there is an option under “Advanced Event Source Settings” to specify which domain is your default.
https://docs.rapid7.com/insightidr/other-deployment-options/
2021-01-16T03:59:12
CC-MAIN-2021-04
1610703499999.6
[array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/insightidr/images/Screen Shot 2018-08-08 at 1.01.40 PM.png', None], dtype=object) ]
docs.rapid7.com
Password reset Summary - User must be assigned one of the roles listed above. - User must have two-step verification enabled. - Account must not be locked. 1. From the sign in page, click “Problems accessing MyST?“. This will display further options. Then click “Forgotten your password?“. 2. Enter your username into the prompt provided and click the “Reset password” button. You will now receive an email that provides instructions on how to reset your password. It will be sent from “[email protected]” with subject “Reset MyST password”. 3. In the password reset email, click “Reset password“. You will be taken to a secure web page where you can enter a new password for your account. 4. Enter a new password for your account and then re-enter it in the next text box to confirm. 5. Click “Reset password” when you are finished to assign the new password. 6. If successful, the browser window will display a message on the page to confirm this. You can now proceed to sign in with your new password. TestingSite users
https://docs.trustpayments.com/document/myst/password-reset/
2021-01-16T02:46:57
CC-MAIN-2021-04
1610703499999.6
[array(['https://docs.trustpayments.com/wp-content/uploads/2016/09/info5-64x64.png', 'Info'], dtype=object) array(['https://docs.trustpayments.com/wp-content/uploads/2016/09/email5-64x64.png', 'Envelope'], dtype=object) array(['https://docs.trustpayments.com/wp-content/uploads/2016/09/clock5-64x64.png', 'Clock'], dtype=object) array(['https://docs.trustpayments.com/wp-content/uploads/2016/09/info5-64x64.png', 'Info'], dtype=object) ]
docs.trustpayments.com
DataPackageOperation Enum Definition Specifies the operation to perform on the DataPackage in clipboard and drag-and-drop scenarios. C#: public enum DataPackageOperation VB: Public Enum DataPackageOperation - Attributes - System.FlagsAttribute ContractVersionAttribute Windows 10 requirements Remarks If your app supports the exchange of data through the clipboard and drag and drop, you need to specify what type of operation the user wants. The available operations are none/no action, copy, move, and link. Many existing controls, such as the text box control, include support for Clipboard actions. Before implementing your own support for these actions, check to see if they are already supported.
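Because the enum carries FlagsAttribute, its values can be combined and tested bitwise. The sketch below is illustrative only: it models the same none/copy/move/link semantics with Python's enum.Flag rather than the actual UWP type, and the numeric values are assumptions chosen to match a typical flags layout.

from enum import Flag

class DataPackageOperation(Flag):
    # Mirrors the documented members: none/no action, copy, move, link.
    NONE = 0
    COPY = 1
    MOVE = 2
    LINK = 4

# A flags enum lets callers advertise several allowed operations at once.
allowed = DataPackageOperation.COPY | DataPackageOperation.LINK
assert DataPackageOperation.COPY in allowed
assert DataPackageOperation.MOVE not in allowed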
https://docs.microsoft.com/en-us/uwp/api/Windows.ApplicationModel.DataTransfer.DataPackageOperation
2017-04-23T06:17:49
CC-MAIN-2017-17
1492917118477.15
[]
docs.microsoft.com
Overview of event dispatch and subscribing To dispatch an event, call the ::dispatch() method on the 'event_dispatcher' service (see the Services topic for more information about how to interact with services). The first argument is the unique event name, which you should normally define as a constant in a separate static class (see and for examples). The second argument is an event object; normally you will need to extend this class so that your event class can provide data to the event subscribers. Here are the steps to register an event subscriber:
http://drupal8docs.diasporan.net/d1/ddf/group__events.html
2017-04-23T05:34:51
CC-MAIN-2017-17
1492917118477.15
[]
drupal8docs.diasporan.net
Overview of the Migration API, which migrates data into Drupal. Migration is an Extract, Transform, Load (ETL) process. For historical reasons, in the Drupal migration tool the extract phase is called "source", the transform phase is called "process", and the load phase is called "destination". Source, process, and destination phases are each provided by plugins. Source plugins extract data from a data source in "rows", containing "properties". Each row is handed off to one or more series of process plugins, where each series operates to transform the row data into one result property. After all the properties are processed, the resulting row is handed off to a destination plugin, which saves the data. The Migrate module provides process plugins for common operations (setting default values, mapping values, etc.), and destination plugins for Drupal core objects (configuration, entity, URL alias, etc.). The Migrate Drupal module provides source plugins to extract data from various versions of Drupal. Custom and contributed modules can provide additional plugins; see the Plugin API topic for generic information about providing plugins, and sections below for details about the plugin types. The configuration of migrations is stored in configuration entities, which list the IDs and configurations of the plugins that are involved. See Migration configuration entities below for details. To migrate an entire site, you'll need to create a migration manifest; see Migration manifests for details. has more complete information on the Migration API, including information on load plugins, which are only used in Drupal 6 migration. Migration source plugins implement and usually extend . They are annotated with annotation, and must be in namespace subdirectory Plugin under the namespace of the module that defines them. Migration source plugins are managed by the class. Migration process plugins implement and usually extend . They are annotated with annotation, and must be in namespace subdirectory Plugin under the namespace of the module that defines them. Migration process plugins are managed by the class. Migration destination plugins implement and usually extend . They are annotated with annotation, and must be in namespace subdirectory Plugin under the namespace of the module that defines them. Migration destination plugins are managed by the class. The definition of how to migrate each type of data is stored in configuration entities. The migration configuration entity class is , with interface ; the configuration schema can be found in the migrate.schema.yml file. Migration configuration consists of IDs and configuration for the source, process, and destination plugins, as well as information on dependencies. Process configuration consists of sections, each of which defines the series of process plugins needed for one destination property. You can find examples of migration configuration files in the core/modules/migrate_drupal/config/install directory. You can run a migration with the "drush migrate-manifest" command, providing a migration manifest file. This file lists the configuration names of the migrations you want to execute, as well as any dependencies they have (you can find these in the "migration_dependencies" sections of the individual configuration files). For example, to migrate blocks from a Drupal 6 site, you would list: Allows adding data to a row before processing it. 
For example, the filter module used to store filter format settings in the variables table; those settings now need to be inside the filter format config file, so they need to be added here. hook_migrate_MIGRATION_ID_prepare_row() is also available. References Row\getSourceProperty(), EntityInterface\id(), and Row\setSourceProperty().
http://drupal8docs.diasporan.net/db/d3f/group__migration.html
2017-04-23T05:35:01
CC-MAIN-2017-17
1492917118477.15
[]
drupal8docs.diasporan.net
GitPython Tutorial¶ GitPython provides object model access to your git repository. This tutorial is composed of multiple sections, most of which explains a real-life usecase. All code presented here originated from test_docs.py to assure correctness. Knowing this should also allow you to more easily run the code for your own testing purposes, all you need is a developer installation of git-python. Meet the Repo type¶ The first step is to create a git.Repo object to represent your repository. from git import Repo join = osp.join # rorepo is a Repo instance pointing to the git-python repository. # For all you know, the first argument to Repo is a path to the repository # you want to work with repo = Repo(self.rorepo.working_tree_dir) assert not repo.bare In the above example, the directory self.rorepo.working_tree_dir equals /Users/mtrier/Development/git-python and is my working repository which contains the .git directory. You can also initialize GitPython with a bare repository. bare_repo = Repo.init(join(rw_dir, 'bare-repo'), bare=True) assert bare_repo.bare A repo object provides high-level access to your data, it allows you to create and delete heads, tags and remotes and access the configuration of the repository. repo.config_reader() # get a config reader for read-only access with repo.config_writer(): # get a config writer to change configuration pass # call release() to be sure changes are written and locks are released Query the active branch, query untracked files or whether the repository data has been modified. assert not bare_repo.is_dirty() # check the dirty state repo.untracked_files # retrieve a list of untracked files # ['my_untracked_file'] Clone from existing repositories or initialize new empty ones. cloned_repo = repo.clone(join(rw_dir, 'to/this/path')) assert cloned_repo.__class__ is Repo # clone an existing repository assert Repo.init(join(rw_dir, 'path/for/new/repo')).__class__ is Repo Archive the repository contents to a tar file. with open(join(rw_dir, 'repo.tar'), 'wb') as fp: repo.archive(fp) Advanced Repo Usage¶ And of course, there is much more you can do with this type, most of the following will be explained in greater detail in specific tutorials. Don’t worry if you don’t understand some of these examples right away, as they may require a thorough understanding of gits inner workings. Query relevant repository paths ... assert osp.isdir(cloned_repo.working_tree_dir) # directory with your work files assert cloned_repo.git_dir.startswith(cloned_repo.working_tree_dir) # directory containing the git repository assert bare_repo.working_tree_dir is None # bare repositories have no working tree Heads Heads are branches in git-speak. References are pointers to a specific commit or to other references. Heads and Tags are a kind of references. GitPython allows you to query them rather intuitively. self.assertEqual(repo.head.ref, repo.heads.master, # head is a sym-ref pointing to master "It's ok if TC not running from `master`.") self.assertEqual(repo.tags['0.3.5'], repo.tag('refs/tags/0.3.5')) # you can access tags in various ways too self.assertEqual(repo.refs.master, repo.heads['master']) # .refs provides all refs, ie heads ... if 'TRAVIS' not in os.environ: self.assertEqual(repo.refs['origin/master'], repo.remotes.origin.refs.master) # ... remotes ... self.assertEqual(repo.refs['0.3.5'], repo.tags['0.3.5']) # ... and tags You can also create new heads ... new_branch = cloned_repo.create_head('feature') # create a new branch ... 
assert cloned_repo.active_branch != new_branch # which wasn't checked out yet ... self.assertEqual(new_branch.commit, cloned_repo.active_branch.commit) # pointing to the checked-out commit # It's easy to let a branch point to the previous commit, without affecting anything else # Each reference provides access to the git object it points to, usually commits assert new_branch.set_commit('HEAD~1').commit == cloned_repo.active_branch.commit.parents[0] ... and tags ... past = cloned_repo.create_tag('past', ref=new_branch, message="This is a tag-object pointing to %s" % new_branch.name) self.assertEqual(past.commit, new_branch.commit) # the tag points to the specified commit assert past.tag.message.startswith("This is") # and its object carries the message provided now = cloned_repo.create_tag('now') # This is a tag-reference. It may not carry meta-data assert now.tag is None You can traverse down to git objects through references and other objects. Some objects like commits have additional meta-data to query. assert now.commit.message != past.commit.message # You can read objects directly through binary streams, no working tree required assert (now.commit.tree / 'VERSION').data_stream.read().decode('ascii').startswith('2') # You can traverse trees as well to handle all contained files of a particular commit file_count = 0 tree_count = 0 tree = past.commit.tree for item in tree.traverse(): file_count += item.type == 'blob' tree_count += item.type == 'tree' assert file_count and tree_count # we have accumulated all directories and files self.assertEqual(len(tree.blobs) + len(tree.trees), len(tree)) # a tree is iterable on its children Remotes allow you to handle fetch, pull and push operations, while providing optional real-time progress information to progress delegates. origin = bare_repo.create_remote('origin', url=cloned_repo.working_tree_dir) assert origin.exists() for fetch_info in origin.fetch(progress=MyProgressPrinter()): print("Updated %s to %s" % (fetch_info.ref, fetch_info.commit)) # create a local branch at the latest fetched master. We specify the name statically, but you have all # information to do it programmatically as well. bare_master = bare_repo.create_head('master', origin.refs.master) bare_repo.head.set_reference(bare_master) assert not bare_repo.delete_remote(origin).exists() # push and pull behave very similarly The index is also called stage in git-speak. It is used to prepare new commits, and can be used to keep results of merge operations. Our index implementation allows you to stream data into the index, which is useful for bare repositories that do not have a working tree.
self.assertEqual(new_branch.checkout(), cloned_repo.active_branch) # checking out branch adjusts the wtree self.assertEqual(new_branch.commit, past.commit) # Now the past is checked out new_file_path = osp.join(cloned_repo.working_tree_dir, 'my-new-file') open(new_file_path, 'wb').close() # create new file in working tree cloned_repo.index.add([new_file_path]) # add it to the index # Commit the changes to deviate master's history cloned_repo.index.commit("Added a new file in the past - for later merge") # prepare a merge master = cloned_repo.heads.master # right-hand side is ahead of us, in the future merge_base = cloned_repo.merge_base(new_branch, master) # allows for a three-way merge cloned_repo.index.merge_tree(master, base=merge_base) # write the merge result into index cloned_repo.index.commit("Merged past and now into future ;)", parent_commits=(new_branch.commit, master.commit)) # now new_branch is ahead of master, which probably should be checked out and reset softly. # note that all these operations didn't touch the working tree, as we managed it ourselves. # This definitely requires you to know what you are doing :) ! assert osp.basename(new_file_path) in new_branch.commit.tree # new file is now in tree master.commit = new_branch.commit # let master point to most recent commit cloned_repo.head.reference = master # we adjusted just the reference, not the working tree or index Submodules represent all aspects of git submodules, which allows you to query all of their related information and manipulate them in various ways. # create a new submodule and check it out on the spot, set up to track the master branch of `bare_repo` # As our GitPython repository has submodules already that point to github, make sure we don't # interact with them for sm in cloned_repo.submodules: assert not sm.remove().exists() # after removal, the sm doesn't exist anymore sm = cloned_repo.create_submodule('mysubrepo', 'path/to/subrepo', url=bare_repo.git_dir, branch='master') # .gitmodules was written and added to the index, which is now being committed cloned_repo.index.commit("Added submodule") assert sm.exists() and sm.module_exists() # this submodule is definitely available sm.remove(module=True, configuration=False) # remove the working tree assert sm.exists() and not sm.module_exists() # the submodule itself is still available # update all submodules, non-recursively to save time, this method is very powerful, go have a look cloned_repo.submodule_update(recursive=False) assert sm.module_exists() # The submodule's working tree was checked out by update
head = repo.head # the head points to the active branch/ref master = head.reference # retrieve the reference the head points to master.commit # from here you use it as any other reference Access the reflog easily. log = master.log() log[0] # first (i.e. oldest) reflog entry log[-1] # last (i.e. most recent) reflog entry Modifying References¶ You can easily create and delete reference types or modify where they point to. new_branch = repo.create_head('new') # create a new one new_branch.commit = 'HEAD~10' # set branch to another commit without changing index or working trees repo.delete_head(new_branch) # delete an existing head - only works if it is not checked out Create or delete tags the same way except you may not change them afterwards. new_tag = repo.create_tag('my_new_tag', message='my message') # You cannot change the commit a tag points to. Tags need to be re-created self.failUnlessRaises(AttributeError, setattr, new_tag, 'commit', repo.commit('HEAD~1')) repo.delete_tag(new_tag) Change the symbolic reference to switch branches cheaply (without adjusting the index or the working tree). new_branch = repo.create_head('another-branch') repo.head.reference = new_branch Understanding Objects¶ In GitPython, all objects can be accessed through their common base, can be compared and hashed. They are usually not instantiated directly, but through references or specialized repository functions. hc = repo.head.commit hct = hc.tree hc != hct # @NoEffect hc != repo.tags[0] # @NoEffect hc == repo.head.reference.commit # @NoEffect Common fields are ... self.assertEqual(hct.type, 'tree') # preset string type, being a class attribute assert hct.size > 0 # size in bytes assert len(hct.hexsha) == 40 assert len(hct.binsha) == 20 Index objects are objects that can be put into git’s index. These objects are trees, blobs and submodules which additionally know about their path in the file system as well as their mode. self.assertEqual(hct.path, '') # root tree has no path assert hct.trees[0].path != '' # the first contained item has one though self.assertEqual(hct.mode, 0o40000) # trees have the mode of a linux directory self.assertEqual(hct.blobs[0].mode, 0o100644) # blobs have specific mode, comparable to a standard linux fs Access blob data (or any object data) using streams. hct.blobs[0].data_stream.read() # stream object to read data from hct.blobs[0].stream_data(open(osp.join(rw_dir, 'blob_data'), 'wb')) # write data to given stream The Commit object¶ Commit objects contain information about a specific commit. Obtain commits using references as done in Examining References or as follows. Obtain commits at the specified revision repo.commit('master') repo.commit('v0.8.1') repo.commit('HEAD~10') Iterate 50 commits, and if you need paging, you can specify a number of commits to skip. 
fifty_first_commits = list(repo.iter_commits('master', max_count=50)) assert len(fifty_first_commits) == 50 # this will return commits 21-30 from the commit list as traversed backwards master ten_commits_past_twenty = list(repo.iter_commits('master', max_count=10, skip=20)) assert len(ten_commits_past_twenty) == 10 assert fifty_first_commits[20:30] == ten_commits_past_twenty A commit object carries all sorts of meta-data headcommit = repo.head.commit assert len(headcommit.hexsha) == 40 assert len(headcommit.parents) > 0 assert headcommit.tree.type == 'tree' assert headcommit.author.name == 'Sebastian Thiel' assert isinstance(headcommit.authored_date, int) assert headcommit.committer.name == 'Sebastian Thiel' assert isinstance(headcommit.committed_date, int) assert headcommit.message != '' Note: date time is represented in a seconds since epoch format. Conversion to human readable form can be accomplished with the various time module methods. import time time.asctime(time.gmtime(headcommit.committed_date)) time.strftime("%a, %d %b %Y %H:%M", time.gmtime(headcommit.committed_date)) You can traverse a commit’s ancestry by chaining calls to parents assert headcommit.parents[0].parents[0].parents[0] == repo.commit('master^^^') The above corresponds to master^^^ or master~3 in git parlance. The Tree object¶ A tree records pointers to the contents of a directory. Let’s say you want the root tree of the latest commit on the master branch tree = repo.heads.master.commit.tree assert len(tree.hexsha) == 40 Once you have a tree, you can get its contents assert len(tree.trees) > 0 # trees are subdirectories assert len(tree.blobs) > 0 # blobs are files assert len(tree.blobs) + len(tree.trees) == len(tree) It is useful to know that a tree behaves like a list with the ability to query entries by name self.assertEqual(tree['smmap'], tree / 'smmap') # access by index and by sub-path for entry in tree: # intuitive iteration of tree members print(entry) blob = tree.trees[0].blobs[0] # let's get a blob in a sub-tree assert blob.name assert len(blob.path) < len(blob.abspath) self.assertEqual(tree.trees[0].name + '/' + blob.name, blob.path) # this is how relative blob path generated self.assertEqual(tree[blob.path], blob) # you can use paths like 'dir/file' in tree There is a convenience method that allows you to get a named sub-object from a tree with a syntax similar to how paths are written in a posix system assert tree / 'smmap' == tree['smmap'] assert tree / blob.path == tree[blob.path] You can also get a commit’s root tree directly from the repository # This example shows the various types of allowed ref-specs assert repo.tree() == repo.head.commit.tree past = repo.commit('HEAD~5') assert repo.tree(past) == repo.tree(past.hexsha) self.assertEqual(repo.tree('v0.8.1').type, 'tree') # yes, you can provide any refspec - works everywhere As trees allow direct access to their intermediate child entries only, use the traverse method to obtain an iterator to retrieve entries recursively assert len(tree) < len(list(tree.traverse())) Note If trees return Submodule objects, they will assume that they exist at the current head’s commit. The tree it originated from may be rooted at another commit though, that it doesn’t know. That is why the caller would have to set the submodule’s owning or parent commit using the set_parent_commit(my_commit) method. The Index Object¶ The git index is the stage containing changes to be written with the next commit or where merges finally have to take place. 
You may freely access and manipulate this information using the IndexFile object. Modify the index with ease index = repo.index # The index contains all blobs in a flat list assert len(list(index.iter_blobs())) == len([o for o in repo.head.commit.tree.traverse() if o.type == 'blob']) # Access blob objects for (path, stage), entry in index.entries.items(): # @UnusedVariable pass new_file_path = osp.join(repo.working_tree_dir, 'new-file-name') open(new_file_path, 'w').close() index.add([new_file_path]) # add a new file to the index index.remove(['LICENSE']) # remove an existing one assert osp.isfile(osp.join(repo.working_tree_dir, 'LICENSE')) # working tree is untouched self.assertEqual(index.commit("my commit message").type, 'commit') # commit changed index repo.active_branch.commit = repo.commit('HEAD~1') # forget last commit from git import Actor author = Actor("An author", "[email protected]") committer = Actor("A committer", "[email protected]") # commit by commit message and author and committer index.commit("my commit message", author=author, committer=committer) Create new indices from other trees or as result of a merge. Write that result to a new index file for later inspection. from git import IndexFile # loads a tree into a temporary index, which exists just in memory IndexFile.from_tree(repo, 'HEAD~1') # merge two trees three-way into memory merge_index = IndexFile.from_tree(repo, 'HEAD~10', 'HEAD', repo.merge_base('HEAD~10', 'HEAD')) # and persist it merge_index.write(osp.join(rw_dir, 'merged_index')) Handling Remotes¶ Remotes are used as alias for a foreign repository to ease pushing to and fetching from them empty_repo = git.Repo.init(osp.join(rw_dir, 'empty')) origin = empty_repo.create_remote('origin', repo.remotes.origin.url) assert origin.exists() assert origin == empty_repo.remotes.origin == empty_repo.remotes['origin'] origin.fetch() # assure we actually have data. fetch() returns useful information # Setup a local tracking branch of a remote branch empty_repo.create_head('master', origin.refs.master) # create local branch "master" from remote "master" empty_repo.heads.master.set_tracking_branch(origin.refs.master) # set local "master" to track remote "master empty_repo.heads.master.checkout() # checkout local "master" to working tree # Three above commands in one: empty_repo.create_head('master', origin.refs.master).set_tracking_branch(origin.refs.master).checkout() # rename remotes origin.rename('new_origin') # push and pull behaves similarly to `git push|pull` origin.pull() origin.push() # assert not empty_repo.delete_remote(origin).exists() # create and delete remotes You can easily access configuration information for a remote by accessing options as if they where attributes. The modification of remote configuration is more explicit though. assert origin.url == repo.remotes.origin.url with origin.config_writer as cw: cw.set("pushurl", "other_url") # Please note that in python 2, writing origin.config_writer.set(...) is totally safe. # In py3 __del__ calls can be delayed, thus not writing changes in time. You can also specify per-call custom environments using a new context manager on the Git command, e.g. for using a specific SSH key. 
The following example works with git starting at v2.3: ssh_cmd = 'ssh -i id_deployment_key' with repo.git.custom_environment(GIT_SSH_COMMAND=ssh_cmd): repo.remotes.origin.fetch() This one sets a custom script to be executed in place of ssh, and can be used in git prior to v2.3: ssh_executable = os.path.join(rw_dir, 'my_ssh_executable.sh') with repo.git.custom_environment(GIT_SSH=ssh_executable): repo.remotes.origin.fetch() Here's an example executable that can be used in place of the ssh_executable above: #!/bin/sh ID_RSA=/var/lib/openshift/5562b947ecdd5ce939000038/app-deployments/id_rsa exec /usr/bin/ssh -o StrictHostKeyChecking=no -i $ID_RSA "$@" Please note that the script must be executable (i.e. chmod +x script.sh). StrictHostKeyChecking=no is used to avoid prompts asking to save the host's key to ~/.ssh/known_hosts, which happens in case you run this as a daemon. You might also have a look at Git.update_environment(...) in case you want to set up a changed environment more permanently. Submodule Handling¶ Submodules can be conveniently handled using the methods provided by GitPython, and as an added benefit, GitPython provides functionality that behaves smarter and is less error prone than the original c-git implementation; that is, GitPython tries hard to keep your repository consistent when updating submodules recursively or adjusting the existing configuration. repo = self.rorepo sms = repo.submodules assert len(sms) == 1 sm = sms[0] self.assertEqual(sm.name, 'gitdb') # git-python has gitdb as single submodule ... self.assertEqual(sm.children()[0].name, 'smmap') # ... which has smmap as single submodule # The module is the repository referenced by the submodule assert sm.module_exists() # the module is available, which doesn't have to be the case. assert sm.module().working_tree_dir.endswith('gitdb') # the submodule's absolute path is the module's path assert sm.abspath == sm.module().working_tree_dir self.assertEqual(len(sm.hexsha), 40) # Its sha defines the commit to checkout assert sm.exists() # yes, this submodule is valid and exists # read its configuration conveniently assert sm.config_reader().get_value('path') == sm.path self.assertEqual(len(sm.children()), 1) # query the submodule hierarchy In addition to the query functionality, you can move the submodule's repository to a different path < move(...)>, write its configuration < config_writer().set_value(...).release()>, update its working tree < update(...)>, and remove or add them < remove(...), add(...)>. If you obtained your submodule object by traversing a tree object which is not rooted at the head's commit, you have to inform the submodule about its actual commit to retrieve the data from by using the set_parent_commit(...) method. The special RootModule type allows you to treat your master repository as the root of a hierarchy of submodules, which allows very convenient submodule handling. Its update(...) method is reimplemented to provide an advanced way of updating submodules as they change their values over time. The update method will track changes and make sure your working tree and submodule checkouts stay consistent, which is very useful in case submodules get deleted or added, to name just two of the handled cases. Additionally, GitPython adds functionality to track a specific branch, instead of just a commit.
Supported by customized update methods, you are able to automatically update submodules to the latest revision available in the remote repository, as well as to keep track of changes and movements of these submodules. To use it, set the name of the branch you want to track to the submodule.$name.branch option of the .gitmodules file, and use GitPython update methods on the resulting repository with the to_latest_revision parameter turned on. In the latter case, the sha of your submodule will be ignored; instead, a local tracking branch will be updated to the respective remote branch automatically, provided there are no local changes. The resulting behaviour is much like the one of svn::externals, which can be useful at times. Obtaining Diff Information¶ Diffs can generally be obtained by subclasses of Diffable as they provide the diff method. This operation yields a DiffIndex allowing you to easily access diff information about paths. Diffs can be made between the Index and Trees, Index and the working tree, trees and trees as well as trees and the working copy. If commits are involved, their tree will be used implicitly. hcommit = repo.head.commit hcommit.diff() # diff tree against index hcommit.diff('HEAD~1') # diff tree against previous tree hcommit.diff(None) # diff tree against working tree index = repo.index index.diff() # diff index against itself yielding empty diff index.diff(None) # diff index against working copy index.diff('HEAD') # diff index against current HEAD tree The item returned is a DiffIndex which is essentially a list of Diff objects. It provides additional filtering to ease finding what you might be looking for. # Traverse added Diff objects only for diff_added in hcommit.diff('HEAD~1').iter_change_type('A'): print(diff_added) Use the diff framework if you want to implement git-status like functionality. - A diff between the index and the commit's tree your HEAD points to - use repo.index.diff(repo.head.commit) - A diff between the index and the working tree - use repo.index.diff(None) - A list of untracked files - use repo.untracked_files Switching Branches¶ To switch between branches similar to git checkout, you effectively need to point your HEAD symbolic reference to the new branch and reset your index and working copy to match. A simple manual way to do it is the following: # Reset our working tree 10 commits into the past past_branch = repo.create_head('past_branch', 'HEAD~10') repo.head.reference = past_branch assert not repo.head.is_detached # reset the index and working tree to match the pointed-to commit repo.head.reset(index=True, working_tree=True) # To detach your head, you have to point to a commit directly repo.head.reference = repo.commit('HEAD~5') assert repo.head.is_detached # now our head points 15 commits into the past, whereas the working tree # and index are 10 commits in the past The previous approach would brutally overwrite the user's changes in the working copy and index though, and is less sophisticated than a git-checkout. The latter will generally prevent you from destroying your work. Use the safer approach as follows. # checkout the branch using git-checkout. It will fail as the working tree appears dirty self.failUnlessRaises(git.GitCommandError, repo.heads.master.checkout) repo.heads.past_branch.checkout() Initializing a repository¶ In this example, we will initialize an empty repository, add an empty file to the index, and commit the change.
import git repo_dir = osp.join(rw_dir, 'my-new-repo') file_name = osp.join(repo_dir, 'new-file') r = git.Repo.init(repo_dir) # This function just creates an empty file ... open(file_name, 'wb').close() r.index.add([file_name]) r.index.commit("initial commit") Please have a look at the individual methods as they usually support a vast number of arguments to customize their behavior. Using git directly¶ In case you are missing functionality as it has not been wrapped, you may conveniently use the git command directly. It is owned by each repository instance. git = repo.git git.checkout('HEAD', b="my_new_branch") # create a new branch git.branch('another-new-one') git.branch('-D', 'another-new-one') # pass strings for full control over argument order git.for_each_ref() # '-' becomes '_' when calling it The return value will by default be a string of the standard output channel produced by the command. Keyword arguments translate to short and long keyword arguments on the command-line. The special notation git.command(flag=True) will create a flag without value like command --flag. If None is found in the arguments, it will be dropped silently. Lists and tuples passed as arguments will be unpacked recursively to individual arguments. Objects are converted to strings using the str(...) function. Object Databases¶ git.Repo instances are powered by their object database instance, which will be used when extracting any data, or when writing new objects. The type of the database determines certain performance characteristics, such as the quantity of objects that can be read per second, the resource usage when reading large data files, as well as the average memory footprint of your application. GitDB¶ The GitDB is a pure-python implementation of the git object database. It is the default database to use in GitPython 0.3. It uses less memory when handling huge files, but will be 2 to 5 times slower when extracting large quantities of small objects from densely packed repositories: repo = Repo("path/to/repo", odbt=GitDB) GitCmdObjectDB¶ The git command database uses persistent git-cat-file instances to read repository information. These operate very fast under all conditions, but will consume additional memory for the process itself. When extracting large files, memory usage will be much higher than that of the GitDB: repo = Repo("path/to/repo", odbt=GitCmdObjectDB) Git Command Debugging and Customization¶ Using environment variables, you can further adjust the behaviour of the git command. - GIT_PYTHON_TRACE - If set to non-0, all executed git commands will be shown as they happen - If set to full, the executed git command _and_ its entire output on stdout and stderr will be shown as they happen NOTE: All logging is output using a Python logger, so make sure your program is configured to show INFO-level messages. If this is not the case, try adding the following to your program: import logging logging.basicConfig(level=logging.INFO) - GIT_PYTHON_GIT_EXECUTABLE - If set, it should contain the full path to the git executable, e.g. c:\Program Files (x86)\Git\bin\git.exe on Windows or /usr/bin/git on Linux.
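As a quick illustration of the debugging options above, here is a minimal sketch that enables full command tracing; it assumes the current directory is a git repository, and it sets GIT_PYTHON_TRACE before importing GitPython because the variable is read when the git module is loaded.

import os
os.environ["GIT_PYTHON_TRACE"] = "full"   # "1" would log only the command lines

import logging
logging.basicConfig(level=logging.INFO)   # trace output goes through the standard logging module

import git                                # import after setting the variable
repo = git.Repo(".")                      # assumes the current directory is a git repository
repo.git.status()                         # the executed git command and its output are now logged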
http://gitpython.readthedocs.io/en/stable/tutorial.html
2017-04-23T05:29:12
CC-MAIN-2017-17
1492917118477.15
[]
gitpython.readthedocs.io
db.system.js.save( { _id: "echoFunction", value : function(x) { return x; } } ) db.system.js.save( { _id : "myAddFunction" , value : function (x, y){ return x + y; } } );
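The calls above use the mongo shell. The same documents can be written from Python; the following is only a hedged sketch using pymongo and bson.Code, where the connection string and database name ("test") are assumptions rather than values from the page.

from pymongo import MongoClient
from bson.code import Code

db = MongoClient("mongodb://localhost:27017")["test"]   # assumed connection and database

# Upsert the stored functions into the system.js collection, mirroring db.system.js.save().
db["system.js"].replace_one(
    {"_id": "echoFunction"},
    {"_id": "echoFunction", "value": Code("function(x) { return x; }")},
    upsert=True,
)
db["system.js"].replace_one(
    {"_id": "myAddFunction"},
    {"_id": "myAddFunction", "value": Code("function(x, y) { return x + y; }")},
    upsert=True,
)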
https://docs.mongodb.com/manual/tutorial/store-javascript-function-on-server/
2017-04-23T05:32:17
CC-MAIN-2017-17
1492917118477.15
[]
docs.mongodb.com
docker rm [OPTIONS] CONTAINER [CONTAINER...] Options Parent command Examples Remove a container This will remove the container referenced under the link /redis. $ docker rm /redis /redis Remove a link specified with --link on the default bridge network This will remove the underlying link between /webapp and the /redis containers on the default bridge network, removing all network communication between the two containers. This does not apply when --link is used with user-specified networks. $ docker rm --link /webapp/redis /webapp/redis Force-remove a running container This command will force-remove a running container. $ docker rm --force redis redis The main process inside the container referenced under the link redis will receive SIGKILL, then the container will be removed. Remove all stopped containers $ docker rm $(docker ps -a -q) This command will delete all stopped containers. The command docker ps -a -q will return all existing container IDs and pass them to the rm command which will delete them. Any running containers will not be deleted. Remove a container and its volumes $ docker rm -v redis redis This command will remove the container and any volumes associated with it. Note that if a volume was specified with a name, it will not be removed. Remove a container and selectively remove volumes $ docker create -v awesome:/foo -v /bar --name hello redis hello $ docker rm -v hello In this example, the volume for /foo will remain intact, but the volume for /bar will be removed. The same behavior holds for volumes inherited with --volumes-from.
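For scripted cleanup, the same operations are available from the Docker SDK for Python. The snippet below is a non-authoritative sketch, assuming the docker Python package is installed and the daemon is reachable; it mirrors the "remove all stopped containers" example, with anonymous volumes removed as with docker rm -v.

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Remove all exited containers and their anonymous volumes (like `docker rm -v`).
for container in client.containers.list(all=True, filters={"status": "exited"}):
    print("removing", container.name)
    container.remove(v=True)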
https://docs.docker.com/edge/engine/reference/commandline/rm/
2017-04-23T05:26:16
CC-MAIN-2017-17
1492917118477.15
[]
docs.docker.com
You customize your chart using the options in the Options... dialog box that you select from the Chart menu. Tip To reduce the amount of customizing you need to do, you should specify the defaults that you would like used for a new chart. To do this, select the Output Options command in the Query menu. The defaults you can set are identical to the options described here. To adjust the current chart, from a Chart window, select the Options command from the Chart menu on the menu bar. In the right hand corner of the dialog box is a sample of the chart style you have chosen. As you select the options you require, they are immediately reflected in the sample. Chart Title You should give your chart a meaningful title. Left Title & Bottom Title Specify titles for the left and bottom margins of the chart if you want to, by entering the text you require in the Left title and Bottom title entry boxes respectively. Chart Type You can change the chart type by selecting the required type from the drop down list. You can choose a: Tip You can also change your Chart Type by selecting the Chart Type you require from the Chart menu on the menu bar. Hot Chart Enable or disable the Hot Charting facility by selecting or deselecting this Hot Chart button. Refer to 5.5 Interactive Hot Charts for details of Hot Charting. Chart Style The styles available are dependent on the chart type you have selected. For example, if you have chosen a bar chart, you could choose a horizontal, vertical and stacked style or for a pie chart, you could choose to include colored labels or % labels and so on. Grid Style This option allows you to choose the type of grid, if any, to appear at the back of your chart. Labels This list box lets you choose whether you want labels on your chart and if so, where they are to be. Legend Column From the fields in your query, select those you want to print in the legend. You can also choose not to include a legend. Legend Style Use this option to customize the appearance of the text used in the legend, for example to use italics. Color Palette This option allows you to change the type of color used for your chart, such as from full color to pastels. Draw Style Use this option to change the drawing style of your chart, for instance from color to black and white. Y-axis Style You can change the scale and range of the Y-axis in a chart using this list box. If you choose User-defined from the list, the maximum Y-value in the chart will be the maximum value found in your selected data. Line Statistics If you have selected a Line chart type, you can add one or more statistics to it by choosing Mean, Standard, Deviation or Best Fit. Foreground From this list box you can choose the font color for all the text on your chart. Background Select the background color for your chart window. Font Use & Fonts Style You can choose the font style for each type of text in your chart. For example, you could bold your Chart Title. Click on the down arrow of the Font Use box for the list of texts you can change, then use the Font Style option to define the style of font you want to use. Print style If you have a color printer, you can use this option to print in color or monochrome. Saving your Chart Options Once you are satisfied with the options you have specified, select the OK button or press the <Enter> key. The current Chart window will immediately reflect your changes. If you want to discard the options you have specified, select the Cancel button or press the <Esc> key. 
To return to the prior settings, select the Reset button.
https://docs.lansa.com/14/en/lansa037/content/lansa/jmp_chart_options_dlg.htm
2018-10-15T13:36:31
CC-MAIN-2018-43
1539583509196.33
[]
docs.lansa.com
Web and Windows applications from a single application model The Framework can provide you with a single and consistent application model for both Windows and Web. Standard interface A design loosely based on Microsoft Outlook. Outlook is very popular around the world and almost all users are familiar with it, whether at work or at home. This model provides a cockpit or dashboard style design where everything that an end-user might need to do is just a few clicks away. XML-based external design schema The Framework is instantly executable. Because of the modular design, many developers can work on different parts of the application at the same time. The versions of the prototype can be quickly emailed for evaluation and feedback. Rapid prototyping Applications, business objects and commands can be defined in a few minutes and can be used in emulation mode before any code to support them actually exists. A vision of how the completed result will look, act and feel can be formed and executed before a single line of code is written. This process also acts as a way of rapidly uncovering new or hidden business requirements. Prototype becomes the application You do not have to discard any part of your prototype. When you are ready to turn the prototype into a real application, you simply snap your custom-made parts into the Framework. This means you keep the basic structure of the application, its business objects, commands, menus and images. Rapid modernization You can use the Framework RAMP tools to quickly enable your IBM i applications for Windows. Absolutely no change to the 5250 application is required and yet RAMP offers advanced navigation, search and organization capabilities that go well beyond other modernization tools. Simple to code The Framework gives the developer much easier access to advanced Visual LANSA features. For example, it implicitly handles multi-form and multi-component interactions and referencing. Load-on-demand architecture A load-on-demand architecture that enforces consistency. Productivity improvements in addition to Visual LANSA The Framework handles all the basic functions of the application, such as multi-form interactions and referencing. A huge "jump start" for new Visual LANSA developers The environment helps developers get started with application development and guides them towards a standard implementation. Gradual and benefits-driven introduction to some of the heavier OO concepts The Framework is based on OO concepts such as inheritance. This underlying structure and its benefits become gradually more obvious to the Framework developers as they progress in implementing the application.
https://docs.lansa.com/14/en/lansa048/content/lansa/lansa048_0070.htm
2018-10-15T12:21:29
CC-MAIN-2018-43
1539583509196.33
[]
docs.lansa.com
Connect to the console session of a server using Remote Desktop for Administration Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 To connect to the console session of a server Using the Remote Desktops MMC Snap-in Using the command line Using the Remote Desktops MMC Snap-in Open Remote Desktops snap-in. If you have not already done so, create the connection to the terminal server or computer to which you want to connect. In the console tree, right-click the connection. In the context menu, click Connect. Note - To open Remote Desktops, click Start, click Control Panel, double-click Administrative Tools, and then double-click Remote Desktops. Using the command line Open Command Prompt. Type: mstsc /console. Remote Desktop Connection will start. Type the computer name or IP address of the computer you want to connect to in the Computer box. Configure any other desired options, and then click Connect. Notes To open a command prompt, click Start, point to All programs, point to Accessories, and then click Command prompt. To view the complete syntax for this command, at a command prompt, type: mstsc /? Information about functional differences - Your server might function differently based on the version and edition of the operating system that is installed, your account permissions, and your menu settings. For more information, see Viewing Help on the Web. See Also Concepts Enable or disable Remote Desktop Terminal Services commands Create a new connection with the Remote Desktops snap-in
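If you need to script the command-line step above, here is a small, hedged sketch in Python; the server name is a hypothetical placeholder, and the script simply shells out to mstsc the same way the manual procedure does.

import subprocess

server = "server01.contoso.com"  # hypothetical server name or IP address

# Equivalent to typing "mstsc /console /v:server01.contoso.com" at a command prompt.
subprocess.run(["mstsc", "/console", f"/v:{server}"], check=True)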
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc775475(v=ws.10)
2018-10-15T13:31:20
CC-MAIN-2018-43
1539583509196.33
[]
docs.microsoft.com
Application stack and server architecture The application stack is divided into three separate models - Application Platform, Application Foundation, and Application Suite. Overview The application stack and server architecture aligns with three key pillars: - New client - Cloud readiness - New development stack The application stack has been divided into three separate models: Application Platform, Application Foundation, and Application Suite. The separation enables new application development on the base foundation models, just as the Fleet Management sample application has been developed. Note the following important points about the changes in the server architecture: The services endpoint on the server is now responsible for returning all form and control metadata and data to the browser-based client. There is no longer any remote procedure call (RPC)-based communication with the server. The form objects still run on the server, and rendering has been optimized for browsers and other clients through server and client-side (browser) investments. The server, including the application code base, is deployed to an Internet Information Services (IIS) web application. In the cloud, it's deployed to Microsoft Azure infrastructure as a service (IaaS) virtual machines (VMs). It is hosted on Azure and is available for access through the Internet. A user can use a combination of clients and credentials to access it. The recommended primary identity provider is OrgID, and the store for the identity is Azure Active Directory (Azure AD). The security subsystem uses the same AuthZ semantics for users and roles. Two types of clients must be considered for access in the cloud: active clients and passive clients. - Active clients can programmatically initiate actions based on responses from the server. An active client doesn't rely on HTTP redirects for authentication. A smart/rich client is an example of an active client. - Passive clients can't programmatically initiate actions based on responses from the server. A passive client relies on HTTP redirects for authentication. A web browser is an example of a passive client. Currently, Access Control Service (ACS) doesn't support a mechanism for non-interactive authentication. Therefore, even when active clients try to authenticate by using ACS, they must use passive client authentication, in which a browser dialog box prompts the user to enter his or her credentials. A completely revamped metadata subsystem incorporates the new compiler and Microsoft Visual Studio–based development model. The model store is represented as a set of folders and XML artifacts that are organized by model. The model elements, such as tables, forms, and classes, are represented by an XML file that contains both metadata and source code. The left side of the following diagram shows how the application stack has been split into distinct models. The right side shows how the key components are stacked in the server. Microsoft Dynamics AX 2012 unionizes permissions that are granted to a user. However, an issue can occur when a data source is granted read permissions through an entry point and edit permissions through a form. Because permissions are unionized, the user eventually has edit permissions to that data source in this case. However, if the form was granted read access through a menu item, the expectation is that the data source can't be edited through that path. Therefore, the context of the call isn't honored. 
In Microsoft Dynamics 365 for Finance and Operations, the context of the call is honored, based on the permissions that are granted through the entry point. If the form was granted read access through a menu item, the framework grants the user only read access to the table. However, if the same form is opened through another menu item that provides write access, the form is granted write permissions. This behavior simplifies the development experience, because developers can specify the desired behavior for a form through a given entry point. Cloud architecture The cloud architecture includes services that automate software deployment and provisioning, operational monitoring and reporting, and seamless application lifecycle management. The cloud architecture consists of three main conceptual areas: - Lifecycle Services (LCS) – LCS is a multi-tenant shared service that enables a wide range of lifecycle-related capabilities. Capabilities that are specific to this release include software development, customer provisioning, service level agreement (SLA) monitoring, and reporting capabilities. - Finance and Operations – The VM instances are deployed through LCS to your Azure subscription. Various topologies are available: demo, development/test, and high-availability production topologies. - Shared Microsoft services – Finance and Operations uses several Microsoft services to enable a “One Microsoft” solution where customers can manage a single sign-in, subscription management, and billing relationship with Microsoft across Finance and Operations, Microsoft Office 365, and other online services. Many features of the Azure platform are used, such as Microsoft Azure Storage, networking, monitoring, and SQL Azure, to name a few. Shared services put into operation and orchestrate the application lifecycle of the environments for participants. Together, Azure functionality and LCS will offer a robust cloud service. Development environment The architecture of the development environment resembles the architecture of the cloud instance. It also includes the software development kit (SDK), which consists of the Visual Studio development tools and other components. Source control through Team Foundation Server or Visual Studio Online enables multiple-developer scenarios, where each developer uses a separate development environment. Deployment packages can be compiled and generated on a development environment and deployed to cloud instances by using LCS. The following diagram shows how the key components interact in a development environment.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/dev-tools/application-stack-server-architecture
2018-10-15T12:42:54
CC-MAIN-2018-43
1539583509196.33
[]
docs.microsoft.com
Define a non-allowed operational transition Define a restriction for CI Lifecycle Management in which a specified CI cannot transition from one operational state to another. About this task By default, CI Lifecycle Management has no restrictions for transitioning CIs from one operational state to another. You can restrict this behaviour by defining transitions that are not allowed for a specified CI. For example, you can define a restriction on transitioning a Linux server from the non-operational state to the repair in progress state. Procedure Navigate to Configuration > CI Lifecycle Management > Not Allowed Operational Transitions. On the Not Allowed Operational Transitions page, click New and fill out the form. Field Description: CI Type - The CI type for which the restriction applies. Not Allowed Transition - The CI state into which transitioning is restricted. Operational State - The operational state that the CI must be in for the restriction to apply. Result: If an API attempts to transition a CI that is in the specified operational state to a state that is not allowed, the operation fails and an error is logged.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/product/configuration-management/task/not-allowed-oprtionl-trnston.html
2018-10-15T13:22:03
CC-MAIN-2018-43
1539583509196.33
[]
docs.servicenow.com
Globus offers Authentication and Authorization services through an OAuth2 service, Globus Auth. Globus Auth acts as an Authorization Server, and allows users to authenticate with, and link together, identities from a wide range of Identity Providers. Although the AuthClient class documentation covers normal interactions with Globus Auth, the OAuth2 flows are significantly more complex. This section documents the supported types of authentication and how to carry them out, as well as providing some necessary background on various OAuth2 elements. Credentials are for Users and also for Applications It is very important that our goal in OAuth2 is not to get credentials for an application on its own, but rather for the application as a client to Globus which is acting on behalf of a user. Therefore, if you are writing an application called foo, and a user [email protected] is using foo, the credentials produced belong to the combination of foo and [email protected]. The resulting credentials represent the rights and permission for foo to perform actions for [email protected] on systems authenticated via Globus. OAuth2 Documentation
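To make the flow concrete, here is a minimal sketch of the Native App grant with the globus_sdk package; the client ID is a placeholder you would obtain by registering your own application, and scopes and other options are left at the SDK defaults.

import globus_sdk

CLIENT_ID = "your-native-app-client-id"  # placeholder: register an app to get a real ID

client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
client.oauth2_start_flow()

print("Please go to this URL and login:", client.oauth2_get_authorize_url())
auth_code = input("Enter the code you get after login here: ").strip()

# Exchange the code for tokens; these represent "app + user", as described above.
token_response = client.oauth2_exchange_code_for_tokens(auth_code)
print(token_response.by_resource_server)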
https://globus-sdk-python.readthedocs.io/en/stable/oauth/
2018-01-16T11:05:03
CC-MAIN-2018-05
1516084886416.17
[]
globus-sdk-python.readthedocs.io
Configuring the iSCSI Target using Ansible Requirements: - A Ceph (12.2.x) cluster or newer - RHEL/CentOS 7.4; or Linux kernel v4.14 or newer - The ceph-iscsi-config package installed on all the iSCSI gateway nodes Installing: On the Ansible installer node, which could be either the administration node or a dedicated deployment node, perform the following steps: As root, install the ceph-ansible package: # yum install ceph-ansible Add an entry in the /etc/ansible/hosts file for the gateway group: [ceph-iscsi-gw] ceph-igw-1 ceph-igw-2 Note If co-locating the iSCSI gateway with an OSD node, then add the OSD node to the [ceph-iscsi-gw] section. Configuring: The ceph-ansible package places a file in the /usr/share/ceph-ansible/group_vars/ directory called ceph-iscsi-gw.sample. Create a copy of this sample file named ceph-iscsi-gw.yml. Review the following Ansible variables and descriptions, and update accordingly. Note When using the gateway_iqn variable, and for Red Hat Enterprise Linux clients, installing the iscsi-initiator-utils package is required for retrieving the gateway's IQN name. The iSCSI initiator name is located in the /etc/iscsi/initiatorname.iscsi file. Deploying: On the Ansible installer node, perform the following steps. As root, execute the Ansible playbook: # cd /usr/share/ceph-ansible # ansible-playbook ceph-iscsi-gw.yml Note The Ansible playbook will handle RPM dependencies, RBD creation and Linux IO configuration. Verify the configuration from an iSCSI gateway node: # gwcli ls Note For more information on using the gwcli command to install and configure a Ceph iSCSI gateway, see the Configuring the iSCSI Target using the Command Line Interface section. Important Attempting to use the targetcli tool to change the configuration will result in issues, such as ALUA misconfiguration and path failover problems. There is the potential to corrupt data, to have mismatched configuration across iSCSI gateways, and to have mismatched WWN information, which will lead to client multipath problems. Service Management: The ceph-iscsi-config package installs the configuration management logic and a Systemd service called rbd-target-gw. When the Systemd service is enabled, the rbd-target-gw will start at boot time and will restore the Linux IO state. The Ansible playbook disables the target service during the deployment. Once the initial configuration has been deployed with the ceph-iscsi-gw.yml file, there are a number of operational workflows that the Ansible playbook supports. Warning Before removing RBD images from the iSCSI gateway configuration, follow the standard procedures for removing a storage device from the operating system. Once a change has been made, rerun the Ansible playbook to apply the change across the iSCSI gateway nodes. # ansible-playbook ceph-iscsi-gw.yml Removing the Configuration: The ceph-ansible package provides an Ansible playbook to remove the iSCSI gateway configuration and related RBD images. The Ansible playbook is /usr/share/ceph-ansible/purge_gateways.yml. When this Ansible playbook is run, you are prompted for the type of purge to perform: lio : In this mode the LIO configuration is purged on all iSCSI gateways that are defined. Disks that were created are left untouched within the Ceph storage cluster. all : When all is chosen, the LIO configuration is removed together with all RBD images that were defined within the iSCSI gateway environment; other unrelated RBD images will not be removed. Ensure the correct mode is chosen; this operation will delete data. 
Warning A purge operation is destructive action against your iSCSI gateway environment. Warning A purge operation will fail, if RBD images have snapshots or clones and are exported through the Ceph iSCSI gateway. [root@rh7-iscsi-client ceph-ansible]# ansible-playbook purge_gateways.yml Which configuration elements should be purged? (all, lio or abort) [abort]: all PLAY [Confirm removal of the iSCSI gateway configuration] ********************* GATHERING FACTS *************************************************************** ok: [localhost] TASK: [Exit playbook if user aborted the purge] ******************************* skipping: [localhost] TASK: [set_fact ] ************************************************************* ok: [localhost] PLAY [Removing the gateway configuration] ************************************* GATHERING FACTS *************************************************************** ok: [ceph-igw-1] ok: [ceph-igw-2] TASK: [igw_purge | purging the gateway configuration] ************************* changed: [ceph-igw-1] changed: [ceph-igw-2] TASK: [igw_purge | deleting configured rbd devices] *************************** changed: [ceph-igw-1] changed: [ceph-igw-2] PLAY RECAP ******************************************************************** ceph-igw-1 : ok=3 changed=2 unreachable=0 failed=0 ceph-igw-2 : ok=3 changed=2 unreachable=0 failed=0 localhost : ok=2 changed=0 unreachable=0 failed=0
http://docs.ceph.com/docs/master/rbd/iscsi-target-ansible/
2018-01-16T11:17:37
CC-MAIN-2018-05
1516084886416.17
[]
docs.ceph.com
These templates are available in pdf format and are print ready. This template is a unique dragon drawing and can also be printed easily for use. Chinese Dragon Draw Template | Download this unique Chinese dragon drawing and use it for tattoo ideas and posters.
http://wp-docs.ru/2016/10/10/14807-%D1%81%D0%BA%D0%B0%D1%87%D0%B0%D1%82%D1%8C-%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD-dragin
2018-01-16T11:34:19
CC-MAIN-2018-05
1516084886416.17
[array(['https://ae01.alicdn.com/kf/HTB1GVuKKpXXXXc5XXXXq6xXFXXXG/1pc-font-b-Tattoo-b-font-font-b-Templates-b-font-hands-feet-henna-font-b.jpg', None], dtype=object) array(['https://ae01.alicdn.com/kf/HTB1aTkMJFXXXXaXXpXXq6xXFXXXq/HF-060-%D1%82%D0%B8%D0%B3%D1%80-%D0%B4%D1%80%D0%B0%D0%BA%D0%BE%D0%BD-%D1%80%D0%B8%D1%81%D1%83%D0%BD%D0%BE%D0%BA-%D0%A0%D0%90%D0%97%D0%9C%D0%95%D0%A0-225-%D0%9C%D0%9C-%D1%85-160-%D0%9C%D0%9C-Brand-New-Body-Art-%D1%82%D0%B0%D1%82%D1%83%D0%B8%D1%80%D0%BE%D0%B2%D0%BA%D0%B0.jpg', None], dtype=object) ]
wp-docs.ru
Manage a team Agile Development allows you to manage team resources easily. Users with the scrum_master and scrum_product_owner roles can create teams, add members and groups, and estimate the effort each team member can contribute, measured in points, for each sprint period. Use burn down charts and velocity charts to help track the progress and effort of a team working on a release. The team assigned to a release is considered the default team for that release. Team members are automatically assigned to associated sprints unless a different team is assigned directly to that sprint.
https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/sdlc-scrum/concept/c_ManagingTeams.html
2018-01-16T11:29:48
CC-MAIN-2018-05
1516084886416.17
[]
docs.servicenow.com
27. Best Practices¶ The best practices mentioned here that affect database design generally refer to best practices when working with Doctrine and do not necessarily reflect best practices for database design in general. 27.1. Constrain relationships as much as possible¶ 27.2. Avoid composite keys¶ Even though Doctrine fully supports composite keys it is best not to use them if possible. Composite keys require additional work by Doctrine and thus have a higher probability of errors. 27.3. Use events judiciously¶ The event system of Doctrine is great and fast. Even though making heavy use of events, especially lifecycle events, can have a negative impact on the performance of your application. Thus you should use events judiciously. 27.4. Use cascades judiciously¶ Automatic cascades of the persist/remove/refresh/etc. operations are very handy but should be used wisely. Do NOT simply add all cascades to all associations. Think about which cascades actually do make sense for you for a particular association, given the scenarios it is most likely used in. 27.5. Don’t use special characters¶ Avoid using any non-ASCII characters in class, field, table or column names. Doctrine itself is not unicode-safe in many places and will not be until PHP itself is fully unicode-aware. 27.6. Don’t use identifier quoting¶ Identifier quoting is a workaround for using reserved words that often causes problems in edge cases. Do not use identifier quoting and avoid using reserved words as table or column names. 27.7. Initialize collections in the constructor¶ It is recommended best practice to initialize any business collections in entities in the constructor. Example: <?php namespace MyProject\Model; use Doctrine\Common\Collections\ArrayCollection; class User { private $addresses; private $articles; public function __construct() { $this->addresses = new ArrayCollection; $this->articles = new ArrayCollection; } } 27.8. Don’t map foreign keys to fields in an entity¶. 27.9. Use explicit transaction demarcation¶.
http://docs.doctrine-project.org/en/latest/reference/best-practices.html
2018-01-16T11:11:18
CC-MAIN-2018-05
1516084886416.17
[]
docs.doctrine-project.org
There are two ways to launch the Avontus Viewer desktop application: Launch Avontus Viewer by clicking the Avontus Viewer icon: 1. Launch Scaffold Designer and open the drawing you want to view in Avontus Viewer. 2. Click the Model tab in the 3D View window. The window refreshes, displaying a 3D rendering of the drawing. 3. Click the Avontus Viewer icon. Avontus Viewer launches and displays the Sign In dialog. 4. Click in the Email Address text field and enter your email address. 5. Click in the Password text field and enter your Avontus Viewer password. 6. Click Sign In. Avontus Viewer signs you in.
https://docs.avontus.com/display/SVR/Signing+in+to+Avontus+Viewer+Desktop
2021-05-06T08:47:43
CC-MAIN-2021-21
1620243988753.91
[]
docs.avontus.com
Configuring service models using CMDB forms As Adobe Flash Player will no longer be supported after December 2020, you will not be able to use the Impact Model Designer (IMD) for administering service models in your environment. However, you can use the HARMAN browser to continue using IMD. Alternatively, you can administer service models and manage the configuration items (CIs) that represent your IT environment using BMC Configuration Management Database (BMC CMDB). You can alternatively achieve the following abilities using approaches mentioned in the table. Note If you are going to continue using IMD on the HARMAN browser, see Administering Service Models using IMD. Related topics Troubleshooting service model creation using CMDB
https://docs.bmc.com/docs/TSInfrastructure11304/configuring-service-models-using-cmdb-forms-976803141.html
2021-05-06T09:23:52
CC-MAIN-2021-21
1620243988753.91
[]
docs.bmc.com
The Patient File contains a new menu. Click Click Here to open it. The user can choose the body part they want; body parts are changed dynamically upon the user's request. Click View to show the chosen patient face. Click Delete to delete the filled patient face. Click Set as Permanent; every area set as permanent will be set for all other faces. Double-click to remove the permanent setting. Double-click on an area to open the injection details required for that area. The following screen appears, where the user can fill in the Botox/Filler related fields. The screen below will appear. Now enter the following information and save the file. Note that two similar tests can be compared. Click Add; the following screen appears, where the user can fill in the Invoice related fields. Click Add Line; a new item row is added as shown below. Click Generate Invoice; the invoice will be added to the invoices list. For More Info About Invoice Click Here The following screen appears, where the user can fill in the Payment related fields. For More Info About Payment Click Here The following screen containing Group Name appears. Enter the Group Name and click Add; the user will be directed to the imaging list page. Click the image icon; the following screen appears. The chosen images/videos will be added to the Select or Drop Images/Videos list as shown below. Click Add Images/Videos to add them to the group. Click Back; the user will be directed to the Imaging List. Click Select to Delete, then choose the image/video to be deleted. Click Photos to see the added photos. Click Videos to see the added videos. The user can cancel a patient's appointment and specify the cancellation reason. The user can add attachments to a patient. Click Select; a pop-up containing files opens up. Select the desired file and press Open. The file is added below the chosen doctor. To undo the selection, simply click Remove. Click Add Attachment to attach the file to the patient. Click Delete to delete an added attachment. To drop a file, simply drag and drop the desired file into the specified box. The user can see the patient's information through charts.
http://docs.imhotep-med.com/patientMenu.aspx
2021-05-06T10:08:33
CC-MAIN-2021-21
1620243988753.91
[]
docs.imhotep-med.com
Deleting Stocks When you delete the stock, all assigned web sites are assigned to the Default Stock. We recommend reassigning websites to other stocks prior to deletion. Important: Deleting a stock can affect salable quantities and unprocessed orders for a sales channel. If you continue using a sales channel, please add the sales channel to another existing or new stock. On the Admin sidebar, go to Stores > Inventory > Stocks. Select one or more stocks to delete. Browse or search and select checkboxes for stocks you want to delete. From the Actions menu, select Delete. In the confirmation dialog, click OK. The stock is deleted and any assigned sales channels are unmapped.
https://docs.magento.com/user-guide/v2.3/catalog/inventory-stock-delete.html
2021-05-06T10:30:30
CC-MAIN-2021-21
1620243988753.91
[]
docs.magento.com
The EF600 storage array can include two HICs – one external and one internal. In this configuration, the external HIC is connected to an internal, auxiliary HIC. Each physical port that you can access from the external HIC has an associated virtual port from the internal HIC. To achieve maximum 200Gb performance, you must assign parameters for both the physical and virtual ports so the host can establish connections to each. If you do not assign parameters to the virtual port, the HIC will run at approximately half its capable speed.
https://docs.netapp.com/ess-11/topic/com.netapp.doc.ssm-sam-116/GUID-05DF6655-B309-4E68-BF01-C8840DE46830.html
2021-05-06T11:02:48
CC-MAIN-2021-21
1620243988753.91
[]
docs.netapp.com
NOTE: An interactive version of this tutorial is available on Colab. Download the Jupyter notebook Clustering and Classification using Knowledge Graph Embeddings¶ In this tutorial we will explore how to use the knowledge embeddings generated by a graph of international football matches (since the 19th century) in clustering and classification tasks. Knowledge graph embeddings are typically used for missing link prediction and knowledge discovery, but they can also be used for entity clustering, entity disambiguation, and other downstream tasks. The embeddings are a form of representation learning that allow linear algebra and machine learning to be applied to knowledge graphs, which otherwise would be difficult to do. We will cover in this tutorial: Creating the knowledge graph (i.e. triples) from a tabular dataset of football matches Training the ComplEx embedding model on those triples Evaluating the quality of the embeddings on a validation set Clustering the embeddings, comparing to the natural clusters formed by the geographical continents Applying the embeddings as features in classification task, to predict match results Evaluating the predictive model on a out-of-time test set, comparing to a simple baseline We will show that knowledge embedding clusters manage to capture implicit geographical information from the graph and that they can be a useful feature source for a downstream machine learning classification task, significantly increasing accuracy from the baseline. Requirements¶ A Python environment with the AmpliGraph library installed. Please follow the install guide. Some sanity check: import numpy as np import pandas as pd import ampligraph ampligraph.__version__ '1.1-dev' Dataset¶ We will use the International football results from 1872 to 2019 available on Kaggle (public domain). It contains over 40 thousand international football matches. Each row contains the following information: Match date Home team name Away team name Home score (goals including extra time) Away score (goals including extra time) Tournament (whether it was a friendly match or part of a tournament) City where match took place Country where match took place Whether match was on neutral grounds This dataset comes in a tabular format, therefore we will need to construct the knowledge graph ourselves. import requests url = '' open('football_results.csv', 'wb').write(requests.get(url).content) 3033782 df = pd.read_csv("football_results.csv").sort_values("date") df.isna().sum() date 0 home_team 0 away_team 0 home_score 2 away_score 2 tournament 0 city 0 country 0 neutral 0 dtype: int64 Dropping matches with unknown score: df = df.dropna() The training set will be from 1872 to 2014, while the test set will be from 2014 to present date. Note that a temporal test set makes any machine learning task harder compared to a random shuffle. df["train"] = df.date < "2014-01-01" df.train.value_counts() True 35714 False 5057 Name: train, dtype: int64 Knowledge graph creation¶ We are going to create a knowledge graph from scratch based on the match information. The idea is that each match is an entity that will be connected to its participating teams, geography, characteristics, and results. The objective is to generate a new representation of the dataset where each data point is an triple in the form: <subject, predicate, object> First we need to create the entities (subjects and objects) that will form the graph. We make sure teams and geographical information result in different entities (e.g. 
the Brazilian team and the corresponding country will be different). # Entities naming df["match_id"] = df.index.values.astype(str) df["match_id"] = "Match" + df.match_id df["city_id"] = "City" + df.city.str.title().str.replace(" ", "") df["country_id"] = "Country" + df.country.str.title().str.replace(" ", "") df["home_team_id"] = "Team" + df.home_team.str.title().str.replace(" ", "") df["away_team_id"] = "Team" + df.away_team.str.title().str.replace(" ", "") df["tournament_id"] = "Tournament" + df.tournament.str.title().str.replace(" ", "") df["neutral"] = df.neutral.astype(str) Then, we create the actual triples based on the relationship between the entities. We do it only for the triples in the training set (before 2014). triples = [] for _, row in df[df["train"]].iterrows(): # Home and away information home_team = (row["home_team_id"], "isHomeTeamIn", row["match_id"]) away_team = (row["away_team_id"], "isAwayTeamIn", row["match_id"]) # Match results if row["home_score"] > row["away_score"]: score_home = (row["home_team_id"], "winnerOf", row["match_id"]) score_away = (row["away_team_id"], "loserOf", row["match_id"]) elif row["home_score"] < row["away_score"]: score_away = (row["away_team_id"], "winnerOf", row["match_id"]) score_home = (row["home_team_id"], "loserOf", row["match_id"]) else: score_home = (row["home_team_id"], "draws", row["match_id"]) score_away = (row["away_team_id"], "draws", row["match_id"]) home_score = (row["match_id"], "homeScores", np.clip(int(row["home_score"]), 0, 5)) away_score = (row["match_id"], "awayScores", np.clip(int(row["away_score"]), 0, 5)) # Match characteristics tournament = (row["match_id"], "inTournament", row["tournament_id"]) city = (row["match_id"], "inCity", row["city_id"]) country = (row["match_id"], "inCountry", row["country_id"]) neutral = (row["match_id"], "isNeutral", row["neutral"]) year = (row["match_id"], "atYear", row["date"][:4]) triples.extend((home_team, away_team, score_home, score_away, tournament, city, country, neutral, year, home_score, away_score)) Note that we treat some literals (year, neutral match, home score, away score) as discrete entities and they will be part of the final knowledge graph used to generate the embeddings. We limit the number of score entities by clipping the score to be at most 5. Below we can see visualise a subset of the graph related to the infamous Maracanazo: The whole graph related to this match can be summarised by the triples below: triples_df = pd.DataFrame(triples, columns=["subject", "predicate", "object"]) triples_df[(triples_df.subject=="Match3129") | (triples_df.object=="Match3129")] Training knowledge graph embeddings¶ We split our training dataset further into training and validation, where the new training set will be used to the knowledge embedding training and the validation set will be used in its evaluation. The test set will be used to evaluate the performance of the classification algorithm built on top of the embeddings. What differs from the standard method of randomly sampling N points to make up our validation set is that our data points are two entities linked by some relationship, and we need to take care to ensure that all entities are represented in train and validation sets by at least one triple. To accomplish this, AmpliGraph provides the train_test_split_no_unseen function. 
from ampligraph.evaluation import train_test_split_no_unseen X_train, X_valid = train_test_split_no_unseen(np.array(triples), test_size=10000) print('Train set size: ', X_train.shape) print('Test set size: ', X_valid.shape) Train set size: (382854, 3) Test set size: (10000, 3) AmpliGraph has implemented several Knowledge Graph Embedding models (TransE, ComplEx, DistMult, HolE), but to begin with we’re just going to use the ComplEx model, which is known to bring state-of-the-art predictive power. The hyper-parameter choice was based on the best results we have found so far for the ComplEx model applied to some benchmark datasets used in the knowledge graph embeddings community. This tutorial does not cover hyper-parameter tuning. from ampligraph.latent_features import ComplEx model = ComplEx(batches_count=50, epochs=300, k=100, eta=20, optimizer='adam', optimizer_params={'lr':1e-4}, loss='multiclass_nll', regularizer='LP', regularizer_params={'p':3, 'lambda':1e-5}, seed=0, verbose=True) Lets go through the parameters to understand what’s going on: batches_count: the number of batches in which the training set is split during the training loop. If you are having into low memory issues than settings this to a higher number may help. epochs: the number of epochs to train the model for. k: the dimensionality of the embedding space. eta($\eta$) : the number of negative, or false triples that must be generated at training runtime for each positive, or true triple. optimizer: the Adam optimizer, with a learning rate of 1e-4 set via the optimizer_params kwarg. loss: pairwise loss, with a margin of 0.5 set via the loss_params kwarg. regularizer: $L_p$ regularization with $p=3$, i.e. l3 regularization. $\lambda$ = 1e-5, set via the regularizer_params kwarg. seed: random seed, used for reproducibility. verbose- displays a progress bar. Training should take around 10 minutes on a modern GPU: import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) model.fit(X_train) Average Loss: 0.400814: 100%|██████████| 300/300 [09:58<00:00, 2.01s/epoch] Evaluating knowledge embeddings¶ AmpliGraph aims to follow scikit-learn’s ease-of-use design philosophy and simplify everything down to fit, evaluate, and predict functions. However, there are some knowledge graph specific steps we must take to ensure our model can be trained and evaluated correctly. The first of these is defining the filter that will be used to ensure that no negative statements generated by the corruption procedure are actually positives. This is simply done by concatenating our train and test sets. Now when negative triples are generated by the corruption strategy, we can check that they aren’t actually true statements. filter_triples = np.concatenate((X_train, X_valid)) For this we’ll use the evaluate_performance function: X- the data to evaluate on. We’re going to use our test set to evaluate. model- the model we previously trained. filter_triples- will filter out the false negatives generated by the corruption strategy. use_default_protocol- specifies whether to use the default corruption protocol. If True, then subj and obj are corrupted separately during evaluation. verbose- displays a progress bar. from ampligraph.evaluation import evaluate_performance ranks = evaluate_performance(X_valid, model=model, filter_triples=filter_triples, use_default_protocol=True, verbose=True) 100%|██████████| 10000/10000 [02:09<00:00, 77.33it/s] We’re going to use the mrr_score (mean reciprocal rank) and hits_at_n_score functions. 
mrr_score: The function computes the mean of the reciprocal of elements of a vector of rankings ranks. hits_at_n_score: The function computes how many elements of a vector of rankings ranks make it to the top n positions. from ampligraph.evaluation import mr_score, mrr_score, hits_at_n_score mr = mr_score(ranks) mrr = mrr_score(ranks) print("MRR: %.2f" % (mrr)) print("MR: %.2f" % (mr)) hits_10 = hits_at_n_score(ranks, n=10) print("Hits@10: %.2f" % (hits_10)) hits_3 = hits_at_n_score(ranks, n=3) print("Hits@3: %.2f" % (hits_3)) hits_1 = hits_at_n_score(ranks, n=1) print("Hits@1: %.2f" % (hits_1)) MRR: 0.26 MR: 4365.06 Hits@10: 0.36 Hits@3: 0.29 Hits@1: 0.19 We can interpret these results by stating that the model will rank the correct entity within the top-3 possibilities 29% of the time. By themselves, these metrics are not enough to conclude the usefulness of the embeddings in a downstream task, but they suggest that the embeddings have learned a reasonable representation enough to consider using them in more tasks. Clustering and embedding visualization¶ To evaluate the subjective quality of the embeddings, we can visualise the embeddings on 2D space and also cluster them on the original space. We can compare the clustered embeddings with natural clusters, in this case the continent where the team is from, so that we have a ground truth to evaluate the clustering quality both qualitatively and quantitatively. Requirements: seaborn adjustText incf.countryutils For seaborn and adjustText, simply install them with pip install seaborn adjustText. For incf.countryutils, do the following steps: git clone cd incf.countryutils pip install .``` ```python from sklearn.decomposition import PCA import matplotlib.pyplot as plt import seaborn as sns from adjustText import adjust_text from incf.countryutils import transformations %matplotlib inline We create a map from the team ID (e.g. “TeamBrazil”) to the team name (e.g. “Brazil”) for visualization purposes. id_to_name_map = {**dict(zip(df.home_team_id, df.home_team)), **dict(zip(df.away_team_id, df.away_team))} We now create a dictionary with the embeddings of all teams: teams = pd.concat((df.home_team_id[df["train"]], df.away_team_id[df["train"]])).unique() team_embeddings = dict(zip(teams, model.get_embeddings(teams))) We use PCA to project the embeddings from the 200 space into 2D space: embeddings_2d = PCA(n_components=2).fit_transform(np.array([i for i in team_embeddings.values()])) We will cluster the teams embeddings on its original 200-dimensional space using the find_clusters in our discovery API: from ampligraph.discovery import find_clusters from sklearn.cluster import KMeans clustering_algorithm = KMeans(n_clusters=6, n_init=50, max_iter=500, random_state=0) clusters = find_clusters(teams, model, clustering_algorithm, mode='entity') This helper function uses the incf.countryutils library to translate country names to their corresponding continents. def cn_to_ctn(country): try: return transformations.cn_to_ctn(id_to_name_map[country]) except KeyError: return "unk" This dataframe contains for each team their projected embeddings to 2D space via PCA, their continent and the KMeans cluster. This will be used alongisde Seaborn to make the visualizations. 
plot_df = pd.DataFrame({"teams": teams, "embedding1": embeddings_2d[:, 0], "embedding2": embeddings_2d[:, 1], "continent": pd.Series(teams).apply(cn_to_ctn), "cluster": "cluster" + pd.Series(clusters).astype(str)}) We plot the results on a 2D scatter plot, coloring the teams by the continent or cluster and also displaying some individual team names. We always display the names of the top 20 teams (according to FIFA rankings) and a random subset of the rest. top20teams = ["TeamBelgium", "TeamFrance", "TeamBrazil", "TeamEngland", "TeamPortugal", "TeamCroatia", "TeamSpain", "TeamUruguay", "TeamSwitzerland", "TeamDenmark", "TeamArgentina", "TeamGermany", "TeamColombia", "TeamItaly", "TeamNetherlands", "TeamChile", "TeamSweden", "TeamMexico", "TeamPoland", "TeamIran"] def plot_clusters(hue): np.random.seed(0) plt.figure(figsize=(12, 12)) plt.title("{} embeddings".format(hue).capitalize()) ax = sns.scatterplot(data=plot_df[plot_df.continent!="unk"], x="embedding1", y="embedding2", hue=hue) texts = [] for i, point in plot_df.iterrows(): if point["teams"] in top20teams or np.random.random() < 0.1: texts.append(plt.text(point['embedding1']+0.02, point['embedding2']+0.01, str(point["teams"]))) adjust_text(texts) The first visualisation of the 2D embeddings shows the natural geographical clusters (continents), which can be seen as a form of the ground truth: plot_clusters("continent") We can see above that the embeddings learned geographical similarities even though this information was not explicit on the original dataset. Now we plot the same 2D embeddings but with the clusters found by K-Means: plot_clusters("cluster") We can see that K-Means found very similar cluster to the natural geographical clusters by the continents. This shows that on the 200-dimensional embedding space, similar teams appear close together, which can be captured by a clustering algorithm. Our evaluation of the clusters can be more objective by using a metric such as the adjusted Rand score, which varies from -1 to 1, where 0 is random labelling and 1 is a perfect match: from sklearn import metrics metrics.adjusted_rand_score(plot_df.continent, plot_df.cluster) 0.39274828260196304 Classification¶ We will use the knowledge embeddings to predict future matches as a classification problem. We can model it as a multiclass problem with three classes: home team wins, home team loses, draw. The embeddings are used directly as features to a XGBoost classifier. 
First we need to determine the target: df["results"] = (df.home_score > df.away_score).astype(int) + \ (df.home_score == df.away_score).astype(int)*2 + \ (df.home_score < df.away_score).astype(int)*3 - 1 df.results.value_counts(normalize=True) 0 0.486473 2 0.282456 1 0.231071 Name: results, dtype: float64 Now we create a function that extracts the features (knowledge embeddings for home and away teams) and the target for a particular subset of the dataset: def get_features_target(mask): def get_embeddings(team): return team_embeddings.get(team, np.full(200, np.nan)) X = np.hstack((np.vstack(df[mask].home_team_id.apply(get_embeddings).values), np.vstack(df[mask].away_team_id.apply(get_embeddings).values))) y = df[mask].results.values return X, y clf_X_train, y_train = get_features_target((df["train"])) clf_X_test, y_test = get_features_target((~df["train"])) clf_X_train.shape, clf_X_test.shape ((35714, 400), (5057, 400)) Note that we have 200 features by team because the ComplEx model uses imaginary and real number for its embeddings, so we have twice as many parameters as defined by k=100 in its model definition. We also have some missing information from the embeddings of the entities (i.e. teams) that only appear in the test set, which are unlikely to be correctly classified: np.isnan(clf_X_test).sum()/clf_X_test.shape[1] 105.0 First install xgboost with pip install xboost. from xgboost import XGBClassifier Create a multiclass model with 500 estimators: clf_model = XGBClassifier(n_estimators=500, max_depth=5, objective="multi:softmax") Fit the model using all of the training samples: clf_model.fit(clf_X_train, y_train) XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5, min_child_weight=1, missing=None, n_estimators=500, n_jobs=1, nthread=None, objective='multi:softprob', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=True, subsample=1) The baseline accuracy for this problem is 47%, as that is the frequency of the most frequent class (home team wins): df[~df["train"]].results.value_counts(normalize=True) 0 0.471030 2 0.287325 1 0.241645 Name: results, dtype: float64 metrics.accuracy_score(y_test, clf_model.predict(clf_X_test)) 0.5378683013644453 In conclusion, while the baseline for this classification problem was 47%, with just the knowledge embeddings alone we were able to build a classifier that achieves 54% accuracy. As future work, we could add more features to the model (not embeddings related) and tune the model hyper-parameters.
https://docs.ampligraph.org/en/1.3.0/tutorials/ClusteringAndClassificationWithEmbeddings.html
2021-05-06T09:36:49
CC-MAIN-2021-21
1620243988753.91
[array(['../_images/FootballGraph.png', '../_images/FootballGraph.png'], dtype=object) array(['../_images/output_53_0.png', '../_images/output_53_0.png'], dtype=object) array(['../_images/output_55_0.png', '../_images/output_55_0.png'], dtype=object) ]
docs.ampligraph.org
Describes one or more of your NAT gateways. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. describe-nat-gateways is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: NatGateways Synopsis: describe-nat-gateways [--dry-run | --no-dry-run] [--filter <value>] [--nat-gateway-ids <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] Options: --filter (list) One or more filters. JSON Syntax: [ { "Name": "string", "Values": ["string", ...] } ... ] --nat-gateway-ids (list) One or more NAT gateway IDs. Output: FailureMessage -> (string) If the NAT gateway could not be created, specifies the error message for the failure, that corresponds to the error code. - For InsufficientFreeAddressesInSubnet: "Subnet has insufficient free addresses to create this NAT gateway" - For Gateway.NotAttached: "Network vpc-xxxxxxxx has no Internet gateway attached" - For InvalidAllocationID.NotFound: "Elastic IP address eipalloc-xxxxxxxx could not be associated with this NAT gateway" - For Resource.AlreadyAssociated: "Elastic IP address eipalloc-xxxxxxxx is already associated" - For InternalError: "Network interface eni-xxxxxxxx, created and used internally by this NAT gateway is in an invalid state. Please try again." - For InvalidSubnetID.NotFound: "The specified subnet subnet-xxxxxxxx does not exist or could not be found." State -> (string) The state of the NAT gateway. - pending : The NAT gateway is being created and is not ready to process traffic. - failed : The NAT gateway could not be created. Check the failureCode and failureMessage fields for the reason. - available : The NAT gateway is able to process traffic. This status remains until you delete the NAT gateway, and does not indicate the health of the NAT gateway. - deleting : The NAT gateway is in the process of being terminated and may still be processing traffic. - deleted : The NAT gateway has been terminated and is no longer processing traffic. SubnetId -> (string) The ID of the subnet in which the NAT gateway is located. VpcId -> (string) The ID of the VPC in which the NAT gateway is located. Tags -> (list) The tags for the NAT gateway.
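For completeness, the same operation is exposed through the EC2 API in the AWS SDKs. A minimal, hedged sketch with boto3 (the region is an illustrative assumption, and the call is shown without filters to keep it generic) might look like:

```python
import boto3

# Sketch only: region is an example; credentials are taken from the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")

# describe_nat_gateways is paginated, so iterate over all pages.
paginator = ec2.get_paginator("describe_nat_gateways")
for page in paginator.paginate():
    for gw in page["NatGateways"]:
        print(gw["NatGatewayId"], gw["State"], gw["VpcId"], gw["SubnetId"])
```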
https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-nat-gateways.html
2021-05-06T10:50:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.aws.amazon.com
ButtonEditClickEventHandler Delegate A method that will handle the ASPxButtonEditBase.ButtonClick event. Namespace: DevExpress.Web Assembly: DevExpress.Web.v19.1.dll Declaration public delegate void ButtonEditClickEventHandler( object source, ButtonEditClickEventArgs e ); Public Delegate Sub ButtonEditClickEventHandler( source As Object, e As ButtonEditClickEventArgs ) Parameters Remarks When creating a ButtonEditClickEvent
https://docs.devexpress.com/AspNet/DevExpress.Web.ButtonEditClickEventHandler?v=19.1
2021-05-06T09:52:40
CC-MAIN-2021-21
1620243988753.91
[]
docs.devexpress.com
After you generate the SAML metadata on the Unified Access Gateway appliance, you can copy that data to the back-end service provider. Copying this data to the service provider is part of the process of creating a SAML authenticator so that Unified Access Gateway can be used as an identity provider. For a Horizon Air server, see the product documentation for specific instructions.
https://docs.vmware.com/en/Unified-Access-Gateway/3.6/com.vmware.uag-36-deploy-config.doc/GUID-FDBADC3D-8EDE-4899-8F11-E618EB1B4156.html
2021-05-06T11:09:12
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
In 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master. The old external_nodes option has been removed. The master tops system contains a number of subsystems that are loaded via the Salt loader interfaces like modules, states, returners, runners, etc. Using the new master_tops option is simple: master_tops: ext_nodes: cobbler-external-nodes master_tops: reclass: inventory_base_uri: /etc/reclass classes_uri: roles It's also possible to create custom master_tops modules. These modules must go in a subdirectory called tops in the extension_modules directory. The extension_modules directory is not defined by default (the default /srv/salt/_modules will NOT work as of this release) Custom tops modules are written like any other execution module, see the source for the two modules above for examples of fully functional ones. Below is a degenerate example: /etc/salt/master: extension_modules: /srv/salt/modules master_tops: customtop: True /srv/salt/modules/tops/customtop.py:
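The code listing for that example module appears to have been lost in extraction. As a hedged sketch of what a degenerate custom top module can look like (the environment name "base" and the state "test" are placeholder assumptions), the module only needs to expose a top() function that returns top file data:

```python
# /srv/salt/modules/tops/customtop.py -- minimal sketch of a custom master_tops module.
# The environment ('base') and state name ('test') below are placeholder assumptions.
import logging

log = logging.getLogger(__name__)


def __virtual__():
    # Name under which the module is referenced in the master config
    # (master_tops: customtop: True)
    return "customtop"


def top(**kwargs):
    log.debug("Calling top in customtop")
    # Return top data: map environments to the list of states for the querying minion.
    return {"base": ["test"]}
```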
https://ansible-cn.readthedocs.io/en/latest/topics/master_tops/index.html
2021-05-06T10:36:31
CC-MAIN-2021-21
1620243988753.91
[]
ansible-cn.readthedocs.io
You're viewing Apigee Edge documentation. View Apigee X documentation. If you're an Apigee Edge Cloud user, there are situations where you may need to request assistance from Support to perform certain tasks (by raising a service request), such as enabling or disabling product features, creating or deleting organizations and environments, or configuring resources. See Apigee Support for information. For other needs, organization administrators have permission to accomplish tasks on their own, such as managing organization users, and creating and managing keystores and truststores. If you're a system administrator of an Apigee Edge for Private Cloud environment, you can do just about anything that Support can do for Apigee Edge Cloud users. Apigee service requests describes different types of service requests you can raise with Support, provides instructions on making those requests, and shows which tasks are self-service and require no intervention from Google.
https://docs.apigee.com/api-platform/system-administration/service-requests?hl=it
2021-05-06T10:13:44
CC-MAIN-2021-21
1620243988753.91
[]
docs.apigee.com
Quickstart¶ Out of the box setup: This is self explanatory, but if you’re using (Pushover, Pushsafer, or IFTTT) then the trigBoard will connect to the WiFI network selected here using DHCP. If you would rather use a static IP, you can also enable this by selecting the check box. Most users just use DHCP, but may have faster connection times by using a static IP: This is the time the trigBoard will wait before failing to connect to the WiFi network and going back to sleep. For most users, 5 seconds is good enough. Just be careful to not make this too big because if you ever relocate the trigBoard to a new location and need to reconfigure, you’ll have to wait for this timeout before the configurator mode is enabled. I would suggest starting with 5 seconds and if you notice missed events, increase to 10 seconds. If that still gives trouble, then there may be a problem with the WiFi router or signal strength at this location. This is the name that will be sent along with the push notification message - normally would be “Front Door” “Garage Door” “Mail Box” The trigBoard will wake on any change of the sensor input - open and close. This setting here is used to decide if a notification is sent. This depends on the application and how you’re planning to use the trigBoard. For a mail box, just open detection might make sense, but for a garage, both open and close would be useful. When you select which to wake on, the messages are enabled for that selection. These could be “Has Opened” or “Has Closed”, because the firmware will combine the trigBoard name with this message. “Garage Has Opened” Warning! This feature was added for very specific applications where the sensor input rapidly opens and closes. Most users would leave this unchecked. There is a complex analog trigger system designed into the trigBoard and it normally detects the wake event based on the current status of the contact. But in some applications, the contact opens and closes very quickly. For this, the high speed trigger will change to use latched circuitry to determine the wake event. But again, this is more for specific applications and should be left unchecked. A message is also sent when the wake button is pressed - this is what that message will be. This is very useful for testing the board and some users have written custom firmware to use the wake button for more advanced features. The timer on the trigBoard is extremely useful. This automatically wakes the board up at a specific interval to check various conditions like low battery or if the contact is still closed/open. It is HIGHLY recommended to keep this value as high as possible, so if a check of once an hour (60 minutes) can work for your application, then set for that. Some applications like a checking if the garage door is still open may need a faster interval like 15 minutes, but just note that this will have some impact on battery life. So ideally the units are set to Minutes, but Seconds were added in as an available feature. Note that this can be useful when developing your own firmware for waking the board automatically when uploading to the board. Like I’ll set to 10seconds when developing, so I never have to physically wake the board to upload. 
If the timer has been enabled to check contact status, then these are the messages that will be sent - usually set to "is Still Open" or "is Still Closed", so the combined message might be "The Garage is Still Open". For most applications monitoring doors/windows, the check for whether the contact is still open is the only one used. This is the threshold: if the battery voltage is less than this value, a BATTERY LOW message is sent out at the timer interval. Because the trigBoard supports a wide variety of battery options, a setting here needs to be set. For a 4.2V rechargeable lithium battery, maybe 3.3V or so would work. Then for two AA/AAA batteries, set for 2.5V. The remaining settings determine the push notification service - see the Supported Services page. Note that the "Battery Voltage Calibration Offset" is set during factory programming.
https://trigboard-docs.readthedocs.io/en/latest/quickstart.html
2021-05-06T08:46:43
CC-MAIN-2021-21
1620243988753.91
[array(['_images/trigBoardweverything.jpg', '_images/trigBoardweverything.jpg'], dtype=object)]
trigboard-docs.readthedocs.io
TagResource Adds the specified tags to the specified resource. If a tag key already exists, the existing value is replaced with the new value. Request Syntax POST /tags/ resourceARNHTTP/1.1 Content-type: application/json { "tags": { " string" : " string" } } URI Request Parameters The request uses the following URI parameters. - resourceARN The Amazon Resource Name (ARN) of the bot, bot alias, or bot channel to tag. Length Constraints: Minimum length of 1. Maximum length of 1011. Required: Yes Request Body The request accepts the following data in JSON format. A list of tag keys to add to the resource. If a tag key already exists, the existing value is replaced with the new value. Type: String to string map Map Entries: Minimum number of 0 items. Maximum number of 200 items. Key Length Constraints: Minimum length of 1. Maximum length of 128. Value Length Constraints: Minimum length of 0. Maximum length of 256. Required: Yes:
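For reference, the same request can be issued through an SDK instead of hand-building the HTTP call. A minimal, hedged sketch with boto3 (the ARN and tag values are placeholders for illustration, assuming the boto3 "lexv2-models" client mirrors the resourceARN and tags members shown above) might look like:

```python
import boto3

# Sketch only: the ARN and tag values are placeholders.
lex_models = boto3.client("lexv2-models", region_name="us-east-1")

# Adds the tags; an existing tag key has its value replaced with the new value.
lex_models.tag_resource(
    resourceARN="arn:aws:lex:us-east-1:123456789012:bot/EXAMPLEBOTID",
    tags={
        "team": "support",
        "environment": "dev",
    },
)
```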
https://docs.aws.amazon.com/lexv2/latest/dg/API_TagResource.html
2021-05-06T11:02:34
CC-MAIN-2021-21
1620243988753.91
[]
docs.aws.amazon.com
Process Authentication Process AuthenticationThis method allows you to process the authentication request to the user, you will require a Request Token to perform this request, see Request Authentication for more information on how to obtain a Request Token. Example Success ResponseThis response is returned when the user has successfully authenticated to your Application. { "success": true, "response_code": 200, "results": { "access_token": "537fc78c61b5ca2ac89c15fb73559a8092f7791e2cdba84e402bd32f8e738e2e", "granted_permissions": [ "READ_PERSONAL_INFORMATION", "INVOKE_TELEGRAM_NOTIFICATIONS", "READ_EMAIL_ADDRESS", "READ_TELEGRAM_CLIENT", "READ_TODO", "MANAGE_TODO", "SYNC_APPLICATION_SETTINGS", "READ_USERNAME", "GET_USER_DISPLAY" ], "expires_timestamp": 1608439904 } } Example Awaiting ResponseThis is a normal and expected response telling you that the server is waiting for the user to authenticate to your Application, at this stage you should poll the request until the results has changed to another error or success response. { "success": false, "response_code": 400, "error": { "error_code": 41, "message": "AWAITING AUTHENTICATION", "type": "COA" } } Process Authentication Response Structure Application Permissions To get more information about what permissions a Application can use and what do they mean, see Application Permissions. Note that the user can deny certain permissions that your Application requests, so for example if you request the ability to view the users Email Address then the user can deny that request but still authenticate to your Application.
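Putting the two responses together, a client typically keeps polling while error_code 41 (AWAITING AUTHENTICATION) is returned. The sketch below is illustrative only: the endpoint URL and request parameter names are placeholders, since they are not shown here, and only the polling pattern around the documented response shapes is meant to carry over.

```python
import time
import requests

# Placeholders: the real endpoint and parameter names come from the COA documentation.
ENDPOINT = "https://example.invalid/coa/v1/process_authentication"
REQUEST_TOKEN = "your-request-token"

while True:
    payload = requests.post(ENDPOINT, data={"request_token": REQUEST_TOKEN}).json()

    if payload.get("success"):
        results = payload["results"]
        print("Access token:", results["access_token"])
        print("Granted permissions:", results["granted_permissions"])
        break

    error = payload.get("error", {})
    if error.get("error_code") == 41:  # AWAITING AUTHENTICATION -> keep polling
        time.sleep(2)
        continue

    raise RuntimeError("Authentication failed: {}".format(error))
```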
https://docs.intellivoid.net/intellivoid/v1/coa/process_authentication
2021-05-06T09:56:10
CC-MAIN-2021-21
1620243988753.91
[]
docs.intellivoid.net
This page details the Player settings specific to the Facebook platform. For a description of the general Player settings, see Player. You can find documentation for the properties in the following sections: Note: Although the Resolution and Presentation panel appears on the Facebook platform's Player settings, there are no settings on the panel. Also, the only settings on the Splash Image panel are the common Splash Screen settings. Since the Facebook build target uses the existing WebGL and Windows Standalone build targets, the Player settings for those targets also apply. Enable the Override for Facebook checkbox to assign a custom icon for your standalone game. You can upload different sizes of the icon to fit each of the squares provided. Use these settings to customize a range of options organized into the following groups: Use these settings to customize how Unity renders your game for the Facebook platform.
https://docs.unity3d.com/2019.2/Documentation/Manual/class-PlayerSettingsFacebook.html
2021-05-06T11:03:34
CC-MAIN-2021-21
1620243988753.91
[]
docs.unity3d.com
INET6_ATON() — Converts an IPv6 internet address from a string to a VARBINARY(16) value INET6_ATON( {string} ) The INET6_ATON() function converts a VARCHAR value representing an IPv6 internet address in hexadecimal notation to a 16-byte VARBINARY value in network byte order. The VARCHAR value must consist of up to eight hexadecimal values separated by colons, such as "2600:141b:4:290::2add", or a null value. Note that in IPv6 addresses, two colons together ("::") can and should be used in place of two or more consecutive zero values in the sequence. You can use the INET6_NTOA() function to reverse the conversion or you can use the INET_ATON and INET_NTOA functions to perform similar conversions on IPv4 addresses.
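The conversion itself is the standard presentation-to-network-byte-order mapping for IPv6 addresses, so expected values can be sanity-checked outside the database. For illustration only (this is plain Python standard library code, not a VoltDB API), the same 16-byte value can be produced like this:

```python
import socket

# Convert the textual IPv6 address to its 16-byte network-byte-order form,
# which is the value INET6_ATON() would store as VARBINARY(16).
addr = "2600:141b:4:290::2add"
packed = socket.inet_pton(socket.AF_INET6, addr)

print(len(packed))   # 16
print(packed.hex())  # 2600141b000402900000000000002add

# And back again, mirroring INET6_NTOA():
print(socket.inet_ntop(socket.AF_INET6, packed))  # 2600:141b:4:290::2add
```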
https://docs.voltdb.com/UsingVoltDB/sqlfuncinet6aton.php
2021-05-06T09:58:14
CC-MAIN-2021-21
1620243988753.91
[]
docs.voltdb.com
Click Add Item to create a new item. The following screen appears; fill in the following fields. The item can be one of the available types. Note that if the item is Retail, it must be Inventory. Items list. Click Add Expense to create a new expense item. The following screen appears; fill in the following fields and save. Expenses list. Click Add Category. Click on Items to view the items that belong to this category. Note that the category is chosen when the Consumables/Retail item is created. The following item belongs to this category. Add Package. Fill in the following fields, then click Save; the package section will appear at the bottom so that items can be added. A package can have Inventory, Retail, and Service items. Packages list. Click Add Service to create a new service. If the service has consumables (that is, it uses consumable items), check it and click Save; the Consumables section will appear. Select the item to add along with a cycle amount. The cycle is the amount used each time the service is performed. For example, if the oil quantity is 40 ml and the cycle is 0.5, every use of this service consumes 20 ml. Services list.
http://docs.imhotep-med.com/items.aspx
2021-05-06T10:20:05
CC-MAIN-2021-21
1620243988753.91
[]
docs.imhotep-med.com
Inspector will send SMS notifications using your Twilio account, so first you need to configure your Twilio account keys to allow Inspector to connect to the Twilio API on your behalf. In Inspector, navigate to the project for which you want to activate SMS notifications. Click Settings → Notifications Channels. Click Configure in the "Twilio - SMS" channel to open the configuration screen. You need three mandatory parameters from your Twilio console: Account SID, Auth Token, From number. You can find this information directly in your Twilio console. You can add as many phone numbers as you want to receive the selected notifications. Separate each number with a comma ( , ). Each number will receive a separate SMS, so each of them contributes to the consumption of your Twilio credit. Remember to click "Save" after any change in the channel settings. If you want to disable the channel, click "Disconnect" and confirm your choice.
https://docs.inspector.dev/notifications/twilio-sms
2021-05-06T09:54:45
CC-MAIN-2021-21
1620243988753.91
[]
docs.inspector.dev
Affinity and anti-affinity rules allow you to spread a group of virtual machines across different ESXi hosts or keep a group of virtual machines on a particular ESXi host. An affinity rule places a group of virtual machines on a specific host so that you can easily audit the usage of those virtual machines. An anti-affinity rule places a group of virtual machines across different hosts, which prevents all virtual machines from failing at once in the event that a single host fails. Affinity and anti-affinity rules are either required or preferred. - Required rule - If the affinity or anti-affinity rules cannot be satisfied, the virtual machines added to the rule do not power on. - Preferred rule - If the affinity or anti-affinity rules are violated, the cluster or host still powers on the virtual machines. For example, if you have an anti-affinity rule between two virtual machines but only one physical host is available, a rule which is required (strong affinity) does not allow both virtual machines to power on. If the anti-affinity rule is preferred (weak affinity), both virtual machines are allowed to power on. Related Videos
https://docs.vmware.com/en/VMware-Cloud-Director/10.2/VMware-Cloud-Director-Tenant-Portal-Guide/GUID-103BE81A-0762-45C6-915D-19B2B75DEE05.html
2021-05-06T08:49:02
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
Welcome to the Atlan Data Wiki—a fun, helpful encyclopedia for the data universe. Learn about all things data. From data sources and formats to big data technologies and Machine Learning (ML) algorithms, we’ve got you covered. 😎 The Atlan Data Wiki started as an internal initiative. At Atlan, we’re growing and welcoming new folks to the team—from data scientists and engineers to marketers and sales folks (aka the humans of data). We realized that understanding the data universe isn’t easy when you’re just getting started. So we decided to do something about it. That's how The Atlan Data Wiki was born! Seeing as we’re all about the community, we opened it up for the humans of data everywhere. Now don't let jargon throw you off your game! 💪
https://docs.atlan.com/community/data-wiki
2021-05-06T10:26:18
CC-MAIN-2021-21
1620243988753.91
[]
docs.atlan.com
Server-side encryption allows you to protect your object data at rest. StorageGRID encrypts the data as it writes the object and decrypts the data when you access the object. While StorageGRID manages all object encryption and decryption operations, you must manage the encryption keys you provide. To encrypt an object with a unique key managed by StorageGRID, you use the following request header: x-amz-server-side-encryption
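Because this is exposed through the standard S3 API, any S3 client library can set the header on a PUT request. As a hedged sketch (the endpoint URL, credentials, bucket name, and the AES256 value are illustrative assumptions, not StorageGRID-specific guidance), with boto3 the header is set through the ServerSideEncryption parameter:

```python
import boto3

# Sketch only: endpoint, credentials, bucket, and the 'AES256' value are
# illustrative assumptions; check the StorageGRID documentation for exact values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# ServerSideEncryption adds the x-amz-server-side-encryption request header,
# asking the system to encrypt the object with a key it manages.
s3.put_object(
    Bucket="my-bucket",
    Key="example-object.txt",
    Body=b"hello world",
    ServerSideEncryption="AES256",
)
```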
https://docs.netapp.com/sgws-113/topic/com.netapp.doc.sg-s3/GUID-3F6CC591-57E9-4156-859F-014D39DDDE39.html
2021-05-06T10:57:40
CC-MAIN-2021-21
1620243988753.91
[]
docs.netapp.com
Read next: - About O&O DiskImage 7 - System requirements - Features at a glance - Installation and registration - Quick Start - Settings for drive imaging - File backup options - Settings for the restoration of drives - Machine independent restoration - Settings for Cloning drives - Start directly from bootable disk - Scheduling functions - Tools - View - Frequently asked questions - End user license agreement 3.7 (EULA)
https://docs.oo-software.com/en/oodiskimage7
2021-05-06T09:51:43
CC-MAIN-2021-21
1620243988753.91
[]
docs.oo-software.com
Release 7.64.0 Release period: 2021-04-07 to 2021-04-14 This release includes the following issues: - Fix GCP Cost Collection - Bug related to Project-Dashboard-Button fixed - Improved error messages Ticket Details Fix GCP Cost Collection Audience: Operator Component: kraken-worker Description There was an issue with GCP cost collection that was caused by infrequent gaps in GCP data delivery. A mechanism has been introduced to prevent this in the future and make GCP cost collection more robust against irregular data delivery. Bug related to Project-Dashboard-Button fixed Audience: Customer, Partner Component: panel Description We fixed a bug related to the Project-Dashboard button. It was possible to click the button even if it was disabled, and then a blank screen appeared. Improved error messages Audience: User, Operator Component: web Description The error message system got a general overhaul. This will result in clearer error messages for the user in the meshPanel. Please let us know if you encounter error messages which are hard to understand. We continuously work on improving them for you!
https://docs.meshcloud.io/blog/2021/04/14/Release-0.html
2021-05-06T09:56:08
CC-MAIN-2021-21
1620243988753.91
[]
docs.meshcloud.io
method install_method_cache Documentation for method install_method_cache assembled from the following types: class Metamodel::Primitives From Metamodel::Primitives (Metamodel::Primitives).
http://docs.perl6.org/routine/install_method_cache
2019-09-15T13:49:47
CC-MAIN-2019-39
1568514571360.41
[]
docs.perl6.org
On-premise to Amazon S3 replication in HDFS The process for creating a replication job from on-premise to Amazon S3 is similar to creating one for on-premise to on-premise. The primary difference is that, you must register your cloud account credentials with DLM App instance, so that DLM can access your cloud storage. Attention: Replication of HDFS data from on-premise to cloud is a limited GA feature in DPS 1.1. The HDFS data that you replicate to cloud requires security policies outside the Hadoop system, so you should work with Hortonworks support to ensure proper configuration of your environment. This does not apply to Hive replication to cloud.
https://docs.cloudera.com/HDPDocuments/DLM1/DLM-1.5.0/administration/content/dlm_on-premise_to_amazon_s3_replication_in_hdfs.html
2019-09-15T12:55:42
CC-MAIN-2019-39
1568514571360.41
[]
docs.cloudera.com
"Failed on Start (retrying)" status for a "Collect Feedback in SharePoint 2010" workflow in SharePoint Online or SharePoint Server Problem Consider the following scenario: - You're using a Collect Feedback – SharePoint 2010 workflow in SharePoint Online or SharePoint Server. - The tasks list where the feedback items are tracked for the workflow has a column or columns for which the Enforce unique values option is set to Yes. For example, this may apply to the Start Date column. - You start the workflow and select multiple users in the Assign To field, and then you select All at once (parallel) for the Order setting. In this scenario, the workflow doesn't start the feedback process, and instead it reports a status of Failed on Start (retrying). Solution To work around this issue, do one of the following: - Select One at a time (serial) for the Order setting when you start the workflow. - Set the Enforce unique values setting to No for the affected tasks list column or columns in which the workflow tasks are stored. More information When you start the workflow by using All at once (parallel) for the Order setting and when Enforce unique values is set to Yes, the workflow does not start because the values for the column aren't unique. Still need help? Go to Microsoft Community. Feedback
https://docs.microsoft.com/en-us/sharepoint/support/workflows/workflow-reports-retrying-status?redirectSourcePath=%252fnl-nl%252farticle%252fStatus-van-een-werkstroom-Verzamelen-van-Feedback-in-SharePoint-2010-in-SharePoint-Online-of-SharePoint-Server-Is-mislukt-in-Start-nieuwe-poging-0B3EC611-2034-4DBC-995C-EB3367E37CBF
2019-09-15T12:34:21
CC-MAIN-2019-39
1568514571360.41
[]
docs.microsoft.com
Caching the data of page components Many pages contain components (web parts or controls) that load and display data from the Kentico database, or other external sources. For example, when displaying a page with a list of news articles, the system retrieves text from the fields of news pages stored in the database, and then formats the data on the page. Communication with storage spaces and processing of data are common weak points in the performance of pages. Content caching helps the system maximize the efficiency of data. The content cache saves data loaded by page components into the application's memory. Components then re-use the cached data on future requests. The following types of web parts and controls support content caching: - Dedicated data sources - Repeaters and viewers with built-in data sources - Navigation components Configuring content caching You can set up content caching on two levels: - Globally for entire sites (all data components on the given site) - For individual instances of web parts or controls We recommend caching all possible content. You can ensure that components do not display outdated content by setting cache dependencies. Most non-custom data sources have default dependencies that automatically clear the content cache whenever the stored data is modified. The cache is shared between web parts with the same setup on different pages. For example, featured products listings. This can improve the performance of first page loads even when higher-level cache is used. To enable content caching globally: - Open the Settings application. - Select the System -> Performance category. - Type a number of minutes into the Cache content (minutes) setting. The value determines how long the content cache retains data, and must be greater than 0. - Save the settings. With content caching enabled globally, the system caches the structured data of all page components by default. To enable content caching for individual web part instances: - Open the Pages application. - Edit the page containing the web part on the Design tab. - Configure the web part instance (double-click). - Type a number of minutes into the Cache minutes property (in the System settings category). The value determines how long the cache stores the web part's data, and must be greater than 0. Recommended settings: - 1 to 60 minutes, depending on the nature of the content. - In combination with partial caching on the same page, the interval should be about 10 times the interval set for the output cache. - (Optional) Add dependencies via the Cache dependencies property. This allows you to automatically clear the cache based on changes in the source data. - Click Save & Close. The system caches the data loaded by the given web part instance. To disable content caching for specific web parts when content caching is enabled globally: - Configure the web part instance on the Design tab. - Type 0 (zero) into the Cache minutes property (in the System settings category). - Click Save & Close. The web part instance reloads data from the data source without caching. Setting dependencies for content cache Cache dependencies allow the system to automatically clear cached data when related objects are modified. Web parts with data sources provide default dependencies for the content cache, and you can also add your own custom dependencies for individual instances: - Open the Pages application. - Edit the page containing the web part on the Design tab. - Configure the web part instance (double-click). 
Add the dependencies into the Cache dependencies property (in the System settings category). For more information and examples, see Setting cache dependencies. If you leave the Use default cache dependencies box checked, the system automatically clears the web part's content cache: - Whenever the loaded data changes (depends on the web part) - When you modify the property configuration of the given web part instance Default dependencies do NOT cover data loaded via external or custom data sources. For example, custom query data sources only clear the content cache when the query itself changes, not the loaded data. - Click Save & Close. The system deletes the web part's content cache whenever the specified objects change. With the cache cleared, the web part reloads the data from the source the next time a visitor opens the page. Sharing the content cache between components The content cache stores the data loaded by page components (web parts or controls) under cache keys. By default, the system generates a unique cache key name for each component. The default name contains variables such as the web part ID, the name of the user viewing the page, or the code name of the language selected for the page. If you have multiple web part instances that load exactly the same data, you can share the cached content: - Open the Pages application. - Configure one of the web parts on the Design tab. - Enter a custom key name into the Cache item name property (in the System settings category). - Copy the key name into the Cache item name property of all web part instances that load the same data. When a visitor opens a page containing one of the web parts, the system loads the data and saves it into the cache under the specified key. While the cached content is valid, other web parts to which you assigned the same Cache item name retrieve the data directly from the cache. This setup optimizes loading of content from the database (or other source) and avoids redundant keys in the cache. Tip: You can use macro expressions to create dynamic cache item names based on variables such as query string parameters, or other context data. Caching the page output of web parts In addition to content caching, web parts also support Partial output caching. The partial cache stores the full HTML output code of web part instances. See Caching portions of the page output for more information. Partial caching has the following advantages and disadvantages when compared with content caching: Better efficiency — the partial cache allows the system to directly send the web part's HTML output to the browser, without loading data or processing the web part at all Usable by all web parts with visible output, not just web parts that load structured data Not suitable for web parts whose output changes very frequently (for example lists with filtering) No default dependencies on the data loaded by web parts — you need to set custom partial cache dependencies to ensure that the content of web parts is up-to-date Note: Set up partial caching for the web part that actually renders the content on the page (this may not always be the same web part as the data source). Was this page helpful?
https://docs.kentico.com/k10/configuring-kentico/optimizing-website-performance/configuring-caching/caching-the-data-of-page-components
2018-05-20T17:48:41
CC-MAIN-2018-22
1526794863662.15
[]
docs.kentico.com
Cancel a work order Cancel a work order if the work is no longer necessary or if it is a duplicate of another work order. About this task: When you cancel a work order, all associated work order tasks are canceled automatically. Work orders can be canceled by different roles during specific states in the work order life cycle. Procedure: 1. Navigate to Work Management > All Work Orders. 2. Open a work order. 3. In Work notes, enter a reason for canceling the work order. 4. Click Cancel. An error message appears if text is not entered into the Work notes field.
https://docs.servicenow.com/bundle/geneva-service-management-for-the-enterprise/page/product/planning_and_policy/task/t_CancelAWorkOrder.html
2018-05-20T17:57:51
CC-MAIN-2018-22
1526794863662.15
[]
docs.servicenow.com
Access Voice from the Utility Bar To continue using Lightning Voice, admins must use the App Manager to make the feature available from the utility bar at the bottom of the page. The utility bar gives your sales reps quick access to commonly used tools. This feature is available in Lightning Experience only. - From Setup, enter App Manager in the Quick Find box, then select App Manager. - Edit an existing Lightning app or click New Lightning App. You can also upgrade a custom Classic app to a Lightning app.If available, the Lightning Sales app contains numerous options preconfigured for sales users. - On the App Options tab, select Lightning Voice. - On the Assign to User Profiles tab, make the app available to relevant user profiles. - Verify the other app details, including the app name, branding information, and available menu items. - Save your changes.To verify your changes, click the App Launcher and select the app that has Lightning Voice enabled. Notify your users about how to now access Lightning Voice. This notification is especially important if Voice isn’t available in their most commonly used app. For more information about the App Manager, see Meet the Lightning Experience App Manager.
https://releasenotes.docs.salesforce.com/en-us/winter17/release-notes/rn_sales_voice_utility_bar.htm
2018-05-20T17:33:27
CC-MAIN-2018-22
1526794863662.15
[array(['release_notes/images/voice_utility_bar.png', 'utility bar in lightning experience'], dtype=object)]
releasenotes.docs.salesforce.com
Clicking in any text field that supports variables opens the Variable Assistant, which shows you the list of variables you can use in the current context, along with a description for each variable. The basic syntax for using variables in scripts and emails is: #{variable.subvariable} For example: #{request.cpuCount} If a variable value may contain spaces, enclose the entire variable in double quotation marks. For example, if a VM's name may contain spaces, use: "#{target.deployedName}" When using custom attributes, always enclose the attribute name in single quotes (whether or not the attribute name contains spaces). The attribute name is case sensitive. For example: #{target.customAttribute['Primary Application']} When passing a variable to a script that uses named arguments, always enclose both the named argument and the string in double quotes, like this: "/Computer:This name has spaces". The following example will not work: script.vbs /vmname:"#{target.deployedName}" You must use the following syntax instead: script.vbs "/vmname:#{target.deployedName}" vCommander allows you to add both Execute Script and Execute Approval Script steps to workflows. Execute Approval Script steps have a special behavior. If the script output returns exit code 0, the workflow proceeds to the next step; if the script returns exit code 1, the workflow fails, and the request is automatically rejected. When configuring the command line for script steps, you must use an absolute path to the executable, even if the executable exists in the Windows path. Otherwise, Java may attempt to execute the command in an incorrect location. The <a> tag is automatically added to links in emails (only the http protocol is supported). For example, if the value of a custom attribute is a link, the value will be formatted as a link in the email. If you do not use HTML markup in the email body, the body is assumed to be plain text; <br> and <p> tags are automatically added for new lines. If you add HTML markup to the email body, however, no additional tags are added. •Dates returned by variables are of the form yyyy/mm/dd hh:mm:ss. The scripts or executables that are called will run under the vCommander service account. Make sure that this account has the appropriate permissions or the script may fail to run. Using Variables to Access vCommander Metadata for Workflows provides links to more information on variables. Workflow Steps Reference provides information on all steps you can add to workflows.
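As a small illustration of the syntax rules above, an email body might combine these variables as follows; the wording and subject line are invented for this example, while the variable references themselves are the ones shown on this page:

Subject: Deployment complete for "#{target.deployedName}"

The request asked for #{request.cpuCount} CPU(s), and the VM's
Primary Application attribute is #{target.customAttribute['Primary Application']}.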
http://docs.embotics.com/syntax_emails_scripts.htm
2018-05-20T17:42:50
CC-MAIN-2018-22
1526794863662.15
[]
docs.embotics.com
Import the VM¶ These screenshots are for VMWare (VirtualBox is nearly the same). Select File -> Import..., choose the graylog.ova file you downloaded, and follow the prompts. Start VM¶ This is what you’ll see when you run the virtual machine. This is where all the Graylog server processes (more on that later) and the database will run. We can split them apart later for performance, but there’s no need to do that right now for a quick overview. Don’t close this window just yet, we’re going to need the IP for the next step. The above steps are also covered in our virtual machine appliance installation page with some additional information. If you do not have DHCP enabled in your network you need to assign a static IP.
http://docs.graylog.org/en/2.2/pages/getting_started/import_run.html
2018-05-20T17:36:34
CC-MAIN-2018-22
1526794863662.15
[array(['../../_images/gs_2-import-vm.png', '../../_images/gs_2-import-vm.png'], dtype=object) array(['../../_images/gs_3-gl-server.png', '../../_images/gs_3-gl-server.png'], dtype=object)]
docs.graylog.org
The maximum number of storage groups The Exchange Server Analyzer examines the msExchMaxStorageGroups attribute of each InformationStore container.
https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-analyzer/aa996673(v=exchg.80)
2018-05-20T18:45:08
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
How to: Compress Snapshot Files (SQL Server Management Studio) Specify that files should be compressed on the Snapshot page of the Publication Properties - <Publication> dialog box. For more information about accessing this dialog box, see How to: View and Modify Publication and Article Properties (SQL Server Management Studio). To compress snapshot files On the Snapshot page of the Publication Properties - <Publication> dialog box: Select Put files in the following folder, and then click Browse to navigate to a directory, or enter the path to the directory in which the snapshot files should be stored. Note. See Also Concepts Changing Publication and Article Properties Compressed Snapshots Initializing a Subscription with a Snapshot Other Resources How to: Configure Snapshot Properties (Replication Transact-SQL Programming) Help and Information Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms151205(v=sql.90)
2018-05-20T18:01:33
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
RadDropDownTree Configuration Wizard The RadDropDownTree Configuration Wizard lets you initially configure the RadDropDownTree control at design-time. To open the Configuration Wizard, simply click the Open Configuration Wizard link in the RadDropDownTree Smart Tag. Figure 1: RadDropDownTree Smart Tag General Configuration The General tab allows configuration of the Expand, Postback and Filtering behaviour of the control. It also lets you set the CheckBoxes and TextMode settings. An Entries delimiter, Path delimiter and Default value can also be set on this tab of the Configuration Wizard. Figure 2: Configuration Wizard general settings DropDown Settings The DropDownSettings tab exposes configuration of some DropDown properties. The tab allows you to set height and width dimensions, set a CSS class, choose whether the width should be calculated according to the width of the longest item in the DropDown, and check a box to indicate whether the DropDown should be opened on initial load of the control. Figure 3: Configuration Wizard DropDown settings Filtering Settings The Filtering Settings tab allows you to configure the filtering behaviour of the RadDropDownTree. You can configure the MinFilterLength, Filter type (StartsWith or Contains), FilterTemplate and Filter Highlight properties here. You can also set a text value for the EmptyMessage filter. Figure 4: Configuration Wizard Filtering settings Button Settings The Button Settings tab allows you to configure whether the ShowCheckAll and ShowClear buttons are displayed. Figure 5: Configuration Wizard Button settings Localization The Localization tab allows you to change the default messages for the buttons embedded within the RadDropDownTree (by default they are "Check All" and "Clear"). Figure 6: Configuration Wizard Localization
https://docs.telerik.com/devtools/aspnet-ajax/controls/dropdowntree/design-time/configuration-wizard
2018-05-20T17:49:30
CC-MAIN-2018-22
1526794863662.15
[array(['images/dropdowntree-smart-tag-menu.png', 'RadDropDownTree Smart Tag'], dtype=object) array(['images/ddt-smart-tag-configuration-wizard-general.png', 'RadDropDownTree Configuration Wizard General'], dtype=object) array(['images/ddt-smart-tag-configuration-wizard-dropdown.png', 'RadDropDownTree Configuration Wizard DropDown'], dtype=object) array(['images/ddt-smart-tag-configuration-wizard-filtering.png', 'RadDropDownTree Configuration Wizard Filtering'], dtype=object) array(['images/ddt-smart-tag-configuration-wizard-buttons.png', 'RadDropDownTree Configuration Wizard Button'], dtype=object) array(['images/ddt-smart-tag-configuration-wizard-localization.png', 'RadDropDownTree Configuration Wizard Localization'], dtype=object)]
docs.telerik.com
This database view contains software update metadata.
Table 1. VUMV_UPDATES (Field - Notes)
UPDATE_ID - Unique ID generated by Update Manager
TYPE - Entity type: virtual machine, virtual appliance, or host
TITLE - Title
DESCRIPTION - Description
META_UID - Unique ID provided by the vendor for this update (for example, MS12444 for Microsoft updates)
SEVERITY - Update severity information: Not Applicable, Low, Moderate, Important, Critical, HostGeneral, and HostSecurity
RELEASE_DATE - Date on which this update was released by the vendor
DOWNLOAD_TIME - Date and time this update was downloaded by the Update Manager server into the Update Manager database
SPECIAL_ATTRIBUTE - Any special attribute associated with this update (for example, all Microsoft Service packs are marked as Service Pack)
COMPONENT - Target component, such as HOST_GENERAL, VM_GENERAL, VM_TOOLS, VM_HARDWAREVERSION or VA_GENERAL
UPDATECATEGORY - Specifies whether the update is a patch or an upgrade.
Parent topic: Database Views
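For readers who want to inspect the view directly, here is a small Python sketch using pyodbc; the connection string, database name, and credentials are placeholders (not values from this page), and the column names are taken from the table above. Depending on your setup, the view may need a schema prefix.

import pyodbc

# Hypothetical connection string; point it at your Update Manager database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=vum-db.example.local;"
    "DATABASE=VCDB;UID=vum_reader;PWD=secret"
)

cursor = conn.cursor()
# Columns come from the VUMV_UPDATES view described above.
cursor.execute(
    "SELECT UPDATE_ID, TYPE, TITLE, SEVERITY, RELEASE_DATE "
    "FROM VUMV_UPDATES WHERE SEVERITY = ?",
    "Critical",
)
for row in cursor.fetchall():
    print(row.UPDATE_ID, row.TITLE, row.RELEASE_DATE)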
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.update_manager.doc/GUID-F201FDC3-7699-4AB4-8F43-F2A37E72F2C2.html
2018-05-20T17:32:11
CC-MAIN-2018-22
1526794863662.15
[]
docs.vmware.com
An Act to repeal subchapter IV (title) of chapter 50 [precedes 50.90]; to amend 20.435 (6) (jm), 50.56 (3), 146.40 (1) (bo), 146.81 (1) (L) and 146.997 (1) (d) 18.; and to create subchapter V (title) of chapter 50 [precedes 50.60], 50.60, 50.65 and subchapter VI (title) of chapter 50 [precedes 50.90] of the statutes; Relating to: pain clinic certification and requirements, granting rule-making authority, and providing a penalty. (FE)
http://docs.legis.wisconsin.gov/2015/proposals/sb272
2018-05-20T17:38:11
CC-MAIN-2018-22
1526794863662.15
[]
docs.legis.wisconsin.gov
OEPrepareDepiction now generates canonical depiction coordinates, i.e., the same 2D coordinates are generated regardless of the atom and bond ordering in the molecule. See the effect of this change in Table: Example of default element color changes. Added the following non-linear color gradient to OESystem: These classes along with the OELinearColorGradient class now derive from the OEColorGradientBase base class. OEAddHighlighting function overloads added that take both atom and bond predicates. OEAlignmentOptions.SetFixedCoords method added that forces the OEPrepareAlignedDepiction function to use the coordinates in the reference molecule instead of depicting them from scratch. OE2DMolDisplayOptions.SetBondLineGapScale method added for changing the gap between the lines of double and triple bonds. OE2DMolDisplayOptions.SetBondLineAtomLabelGapScale method added for changing the gap between the atom labels and the end of the bond lines.
https://docs.eyesopen.com/toolkits/java/depicttk/releasenotes/version2_2_0.html
2018-05-20T17:20:39
CC-MAIN-2018-22
1526794863662.15
[]
docs.eyesopen.com
Capacity Specifications (Analysis Services- Data Mining) The following table specifies the maximum sizes and numbers of data mining objects that you can define in Microsoft SQL Server Analysis Services. For maximum capacities of cubes and other related Analysis Services objects, see Maximum Capacity Specifications (Analysis Services - Multidimensional Data). In practice, the limits may be lower for optimal performance.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ff953220(v=sql.105)
2018-05-20T17:55:09
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
Description / Features Computes a new metric named violation density. This is essentially the "opposite" of the rule compliance metric. The rule compliance metric's formula is: max(0, 100 - weighted_violations / Lines of code * 100). - Configure your filter(s) to replace the Rules Compliance column with the Violation Density one. - Configure your dashboard(s) to replace the Rules Compliance widget with the Violation Density one. Change Log Version 1.2 (1 issue) Version 1.1 (1 issue)
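Restated in LaTeX, the given compliance formula is shown below, together with the density reading it implies; the second line is an assumption based on the "opposite" description and is not stated explicitly on this page.

\text{rule compliance} = \max\left(0,\; 100 - \frac{\text{weighted\_violations}}{\text{LOC}} \times 100\right)

\text{violation density} \approx \frac{\text{weighted\_violations}}{\text{LOC}} \times 100 \quad \text{(assumed, so that compliance} = \max(0,\, 100 - \text{density}))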
http://docs.codehaus.org/display/SONAR/Violation+Density+Plugin
2014-03-07T09:57:12
CC-MAIN-2014-10
1393999640676
[array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif', None], dtype=object) ]
docs.codehaus.org
Exported From Confluence
Annogen needs your help! There are lots of things to do. You can fix up the docs in the wiki. Download the code & try it out and see what you think. Browse the source code.
Want to write some code for Annogen? Take a look at our issue tracker for open issues or features that need to be implemented. Take one of those and try to fix it - don't be shy!
There are various ways of communicating with the Annogen community.
Please raise a new issue in our issue tracker
http://docs.codehaus.org/exportword?pageId=15650
2014-03-07T09:56:09
CC-MAIN-2014-10
1393999640676
[]
docs.codehaus.org
Codenvy provides an environment for running and debugging apps in the cloud that reproduces the debugging experience of a conventional desktop IDE. When the debugger is launched, the app runs on the codenvy.com server. Here is an overview of the Debugger features: - Go to the Run menu and click Debug Application. Once the Debugger is successfully launched, you will see a message in the Output tab. A random temporary URL is generated, letting you access your app and see changes as you debug it. A Debugger window with two tabs (Breakpoints and Variables) will open; both tabs are empty at first, until you set a breakpoint and interact with the app. - Open a Java class file and set a breakpoint. You can set conditions for breakpoints at Run > Breakpoint properties, or use the context menu in the Breakpoints tab (right mouse click), to define the conditions under which a breakpoint stops the app. - Click the application URL and run the app to trigger the first breakpoint. Go back to the Debugger to inspect variables. Below are the Debugger buttons, which have the same effect as in a desktop IDE. Note that the default debug session timeout is 10 minutes. Two minutes before the session times out, you will see a warning that the app will be stopped; you can then extend the debugging session for another 10 minutes. Once the debugging session is over, you will see a warning message in the Output panel. The best way to understand the power of debugging apps in the cloud is to watch debugging in action. Here’s a short video:
http://docs.codenvy.com/user/how-to-debug-a-java-application/
2014-03-07T09:51:20
CC-MAIN-2014-10
1393999640676
[array(['http://docs.codenvy.com/wp-content/uploads/debugger.png', 'debugger'], dtype=object) array(['http://docs.codenvy.com/wp-content/uploads/debugger_tab-500x251.png', 'debugger_tab'], dtype=object) array(['http://docs.codenvy.com/wp-content/uploads/buttons-500x163.png', 'buttons'], dtype=object) ]
docs.codenvy.com
GSheader The aim of this document is to show three main options:- The menu and article details will vary from site to site and the detail depends on the editor used. Using a simple editor You need to know the address of the menu or article that you want to link. The best way to do this is:- If you have sample data on a localhost web site, link something to the Menu 'More about Joomla!'. This opens a dialogue box. When the article is saved, check that the link works. Another editor Some editors allow for choosing menus and articles within the current web site, as the illustration shows. The initial process is the same:- This opens a dialogue box. To choose the right menu:- Find the menu that you want to link. This shows all the options below that menu item. To open it in a new window (rather than in the same page):- Finally - to insert the link - You need to know the address of the page you want to link. The best way to do this is:- In your article:- This opens the dialogue box. To open in a new window (rather than in the same page) - this is often helpful for external sites:- Finally - to insert the link -
http://docs.joomla.org/index.php?title=J1.5:Add_links_to_other_pages:_Joomla!_1.5&diff=36573&oldid=36572
2014-03-07T09:54:09
CC-MAIN-2014-10
1393999640676
[]
docs.joomla.org
Welcome to the Haus The Codehaus is an open-source project repository with a strong emphasis on modern languages, focussed on quality components that meet real world needs. We believe in open source as a pragmatic approach to software development, and all our projects are business-friendly in terms of licensing. Enjoy your stay at the haus! Research the Haus See our Manifesto and Project Selection guidelines Join the Haus Support the Haus So you want to Support the Haus
http://docs.codehaus.org/pages/viewpage.action?pageId=77693297
2014-03-07T09:54:06
CC-MAIN-2014-10
1393999640676
[]
docs.codehaus.org
You're considered a factual resident of Canada for tax purposes if you keep residential ties with Canada while travelling or living abroad. The term factual resident means that although you're not in Canada, you're still considered a resident of Canada for income tax purposes. If you're conducting missionary work in another country and you meet certain requirements, you may choose to be a factual resident even if you don't keep residential ties with Canada. If you also establish residential ties in a country with which Canada has a tax treaty and you're considered to be a resident of that country for the purposes of that tax treaty, you may be considered a deemed non-resident of Canada for tax purposes. In either of these cases, contact the International Tax Services Office for more information.
http://docs.quicktaxweb.ca/ty10/english/text/en/common/glossary/d_factual_res.html
2014-03-07T09:51:36
CC-MAIN-2014-10
1393999640676
[]
docs.quicktaxweb.ca
It is also a great help to the community if you contribute your code to the research archive. How can you help Jikes RVM? Jikes RVM is a fairly large, complex, and perhaps intimidating system. However, there are many to-do items which don't require extensive Jikes RVM expertise. This page highlights a selection of low-hanging fruit for potential contributions. This list is of course nowhere near comprehensive, but exists just as a sampling of potential activities.
http://docs.codehaus.org/pages/viewpage.action?pageId=231736044
2014-03-07T09:53:36
CC-MAIN-2014-10
1393999640676
[]
docs.codehaus.org
Part IV. After installation This part of the Fedora Installation Guide covers finalizing the installation, as well as some installation-related tasks that you might perform at some time in the future. These include: using a Fedora installation disk to rescue a damaged system. upgrading to a new version of Fedora. removing Fedora from your computer. Table of Contents 16. Firstboot 16.1. License Agreement 16.2. Create User 16.2.1. Authentication Configuration 16.3. Date and Time 16.4. Hardware Profile 17. Your Next Steps 17.1. Updating Your System 17.2. Finishing an Upgrade 17.3. Switching to a Graphical Login 17.3.1. Enabling Access to Software Repositories from the Command Line 17.4. Subscribing to Fedora Announcements and News 17.5. Finding Documentation and Support 17.6. Joining the Fedora Community 18. Basic System Recovery 18.1. Rescue Mode 18.1.1. Common Problems 18.1.2. Booting into Rescue Mode 18.1.3. Booting into Single-User Mode 18.1.4. Booting into Emergency Mode 19. Upgrading Your Current System 19.1. Determining Whether to Upgrade or Re-Install 19.2. Upgrading Your System
http://docs.fedoraproject.org/en-US/Fedora/15/html/Installation_Guide/pt-After_installation.html
2014-03-07T09:50:49
CC-MAIN-2014-10
1393999640676
[]
docs.fedoraproject.org
The view displays all files and folders in the current folder. These items can be accessed or manipulated in different ways: A file or folder can be opened by clicking it with the Open by double-clicking instead is enabled in the System Settings in the → module).mouse button (or double-clicking, if Clicking any item or the white area around the items with themouse button opens a context menu which provides access to many frequently used actions for the item or the current folder, respectively. If the Dolphin view (in another Dolphin window or in the same window if the view is split, see below) to move or copy it or to create a symbolic link. Items can even be dropped in another application to open them in that application.mouse button is pressed on an item, but not immediately released, the item can be dragged and dropped in another folder in the current view or in another Dolphin remembers the history of visited folders. To navigate backward or forward in the history, the corresponding buttons in the toolbar can be used: The and buttons in the toolbar can be used to navigate in the history. If you click with themouse button the item in the history is opened in a new tab thus keeping the current tab with its content. The toolbar contains buttons to control the appearance of the view: The buttons in the toolbar which control the appearance of the view. All the settings discussed below and other options concerning, e.g. the sorting of the files in the current folder, can also be modified in the menu and in the View Display Style dialog. By default, these settings are remembered for each folder separately. This behavior can be changed in the “General” section of the settings. The first three buttons in the above screenshot switch between Dolphin's view modes. folder in a tree-like fashion if Expandable folders are enabled: Each subfolder of the current folder can be “expanded” or “collapsed” by clicking on the > or v icon next to it. All view modes support grouping by the sort type selected in → In all view modes Dolphin shows at least an icon and a name for each item. Using in the menu or the context menu of the header in Details mode, you can select more information for each item to be shown: , , , , or . Depending on the file type, additionally, sorting criteria can be selected: The submenu allows you to select , , , , or . If is enabled, the icons are based on the actual file or folder contents; e.g. for images a scaled down preview of the image is shown. There General section of the settings, a small + or - button appears in the top left corner of the item which is currently hovered over with the mouse. Clicking this sign selects or deselects the item, respectively. If an arrow key, Page Up, Page Down,.
https://docs.kde.org/stable5/en/dolphin/dolphin/dolphin-view.html
2021-10-16T11:30:07
CC-MAIN-2021-43
1634323584567.81
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object) array(['toolbar-navigation.png', 'The Back and Forward buttons in the toolbar.'], dtype=object) array(['toolbar-view-appearance.png', 'The buttons in the toolbar which control the appearance of the view.'], dtype=object) array(['grouping-view.png', 'Grouped View'], dtype=object)]
docs.kde.org
# Securing TiddlyWiki on Node.js This guide covers using Pomerium to add authentication and authorization to an instance of TiddlyWiki on NodeJS (opens new window). # What is TiddlyWiki on Node.js TiddlyWiki is a personal wiki and a non-linear notebook for organizing and sharing complex information. It is available in two forms: - a single HTML page - a Node.js application (opens new window) We are using the Node.js application in this guide. # Where Pomerium fits TiddlyWiki allows a simple form of authentication by using authenticated-user-header parameter of listen command (opens new window). Pomerium provides the ability to login with well-known identity providers. # Pre-requisites This guide assumes you have already completed one of the quick start guides, and have a working instance of Pomerium up and running. For purpose of this guide, We will use docker-compose, though any other deployment method would work equally well. # Configure # Pomerium Config jwt_claims_headers: email policy: - from: to: policy: - allow: or: - email: is: [email protected] - email: is: [email protected] # Docker-compose version: "3" services: pomerium: image: pomerium/pomerium:latest volumes: # Use a volume to store ACME certificates - ./config.yaml:/pomerium/config.yaml:ro ports: - 443:443 tiddlywiki_init: image: elasticdog/tiddlywiki:latest volumes: - ./wiki:/tiddlywiki command: ['mywiki', '--init', 'server'] tiddlywiki: image: elasticdog/tiddlywiki:latest ports: - 8080:8080 volumes: - ./wiki:/tiddlywiki command: - mywiki - --listen - host=0.0.0.0 - authenticated-user-header=x-pomerium-claim-email - [email protected] - [email protected] depends_on: - tiddlywiki_init # That's it Navigate to your TiddlyWiki instance (e.g.) and log in: as [email protected]: user can read the wiki, but there is no create new tiddler button is show up. as [email protected]: user can read the wiki and create new tiddlers. as another email: pomerium displays a permission denied error.
https://master.docs.pomerium.io/guides/tiddlywiki
2021-10-16T11:53:21
CC-MAIN-2021-43
1634323584567.81
[]
master.docs.pomerium.io
Model Prediction Store In order to save compute time and prevent repetitive re-computation leading to the same output, TrainLoop utilizes the aitoolbox.torchtrain.train_loop.components.model_prediction_store.ModelPredictionStore which is used for results caching. Especially when using multiple callbacks all executing the same computation, such as making predictions on the validation set this can get quite time consuming. To speed up training process TrainLoop will calculate the prediction on particular dataset as part of the current epoch only once and then cache the predictions. If as part of the same epoch another calculation of predictions on the same data set is requested, the TrainLoop will retrieve the cached results instead of recomputing them again. Currently the ModelPredictionStore supports caching the model loss and model prediction caching on the train, validation and test data sets. As part of the TrainLoop the model prediction store cache lifecycle ends at the end of the epoch. All the cached model outputs are removed at the end of the epoch and the new epoch where the weights of the model will change is started with the clean prediction cache. To most users this caching is visible as part of the TrainLoop’s loss calculation methods: aitoolbox.torchtrain.train_loop.train_loop.TrainLoop.evaluate_loss_on_train_set() aitoolbox.torchtrain.train_loop.train_loop.TrainLoop.evaluate_loss_on_validation_set() aitoolbox.torchtrain.train_loop.train_loop.TrainLoop.evaluate_loss_on_test_set() and as part of the TrainLoop’s model prediction calculation methods: aitoolbox.torchtrain.train_loop.train_loop.TrainLoop.predict_on_train_set() aitoolbox.torchtrain.train_loop.train_loop.TrainLoop.predict_on_validation_set() aitoolbox.torchtrain.train_loop.train_loop.TrainLoop.predict_on_test_set() Important to note here, is that by default TrainLoop will try to save compute time and cache model outputs when possible instead of recomputing them. However, if for a particular use case the user wants to get fresh recomputed loss or model predictions then the force_prediction parameter in any of the model output computation methods listed above has to be switched to True. This will cause them to ignore the cached values and recompute them from scratch.
https://aitoolbox.readthedocs.io/en/latest/torchtrain/adv/model_prediction_store.html
2021-10-16T11:13:54
CC-MAIN-2021-43
1634323584567.81
[]
aitoolbox.readthedocs.io
Caution Buildbot no longer supports Python 2.7 on the Buildbot master. Amazon Web Services Elastic Compute Cloud (“AWS EC2”)¶ EC2 is a web service that allows you to start virtual machines in an Amazon data center. Please see their website for details, including costs. Using the AWS EC2 latent workers. This document will guide you through setup of a AWS EC2 latent worker: Get an AWS EC2 Account¶ To start off, to use the AWS EC2 latent worker, ‘Payment Method’. - Make sure you’re signed up for EC2 by going to worker that connects to your master (to create a buildbot worker, Creating a worker; to make a daemon, Launching the daemons). - You may want to make an instance of the buildbot worker, configure it as a standard worker in the master (i.e., not as a latent worker), and test and debug it that way before you turn it into an AMI and convert to a latent worker in the master. - In order to avoid extra costs in case of master failure, you should configure the worker of the AMI with maxretriesoption (see Worker Options) Also see example systemd unit file example Configure the Master with an EC2LatentWorker¶ Now let’s assume you have an AMI that should work with the EC2LatentWorker.. - On the page, you’ll see alphanumeric values for “Your Access Key Id:” and “Your Secret Access Key:”. Make a note of these. Later on, we’ll call the first one your identifierand the second one your secret_identifier. When creating an EC2LatentWorker in the buildbot master configuration, the first three arguments are required. The name and password are the first two arguments, and work the same as with normal worker. It specifies all necessary remaining values explicitly in the instantiation. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', ami='ami-12345', identifier='publickey', secret_identifier='privatekey' keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', ) ]. Buildbot supports the standard AWS credentials file. You can then make the access privileges stricter for this separate file, and potentially let more people read your main configuration file. If your master is running in EC2, you can also use IAM roles for EC2 to delegate permissions. keypair_name and security_name allow you to specify different names for these AWS EC2 values. You can make an .aws directory in the home folder of the user running the buildbot master. In that directory, create a file called credentials. The format of the file should be as follows, replacing identifier and secret_identifier with the credentials obtained before. [default] aws_access_key_id = identifier aws_secret_access_key = secret_identifier If you are using IAM roles, no config file is required. Then you can instantiate the worker as follows. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', ami='ami-12345', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', ) ].plugins import worker bot1 = worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', valid_ami_owners=[11111111111, 22222222222], identifier='publickey', secret_identifier='privatekey', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', ) The other available filter is to provide a regular expression string that will be matched against each AMI’s location (the S3 bucket and manifest name).', ) The regular expression can specify a group, which will be preferred for the sorting. 
Only the first group is used; subsequent groups are ignored.', ) If the group can be cast to an integer, it will be. This allows 10 to sort after 1, for instance. from buildbot.plugins import worker bot1 = worker.EC2LatentWorker( 'bot1', 'sekrit', 'm1.large', valid_ami_location_regex=r'buildbot\-.*\-(\d+)/image.manifest.xml', identifier='publickey', secret_identifier='privatekey', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', ) In addition to using the password as a handshake between the master and the worker, you may want to use a firewall to assert that only machines from a specific IP can connect as workers. This is possible with AWS EC2 by using the Elastic IP feature. To configure, generate a Elastic IP in AWS, and then specify it in your configuration using the elastic_ip argument. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', 'ami-12345', identifier='publickey', secret_identifier='privatekey', elastic_ip='208.77.188.166', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', ) ] One other way to configure a worker is by settings AWS tags. They can for example be used to have a more restrictive security IAM policy. To get Buildbot to tag the latent worker specify the tag keys and values in your configuration using the tags argument. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', 'ami-12345', identifier='publickey', secret_identifier='privatekey', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', tags={'SomeTag': 'foo'}) ] If the worker needs access to additional AWS resources, you can also enable your workers to access them via an EC2 instance profile. To use this capability, you must first create an instance profile separately in AWS. Then specify its name on EC2LatentWorker via instance_profile_name. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', ami='ami-12345', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', instance_profile_name='my_profile' ) ] You may also supply your own boto3.Session object to allow for more flexible session options (ex. cross-account) To use this capability, you must first create a boto3.Session object. Then provide it to EC2LatentWorker via session argument. import boto3 from buildbot.plugins import worker session = boto3.session.Session() c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', ami='ami-12345', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', session=session ) ] The EC2LatentWorker supports all other configuration from the standard Worker. The missing_timeout and notify_on_missing specify how long to wait for an EC2 instance to attach before considering the attempt to have failed, and email addresses to alert, respectively. missing_timeout defaults to 20 minutes. Volumes¶ If you want to attach existing volumes to an ec2 latent worker, use the volumes attribute. This mechanism can be valuable if you want to maintain state on a conceptual worker across multiple start/terminate sequences. volumes expects a list of (volume_id, mount_point) tuples to attempt attaching when your instance has been created. If you want to attach new ephemeral volumes, use the the block_device_map attribute. This follows the AWS API syntax, essentially acting as a passthrough. 
The only distinction is that the volumes default to deleting on termination to avoid leaking volume resources when workers are terminated. See boto documentation for further details. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', ami='ami-12345', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', block_device_map= [ "DeviceName": "/dev/xvdb", "Ebs" : { "VolumeType": "io1", "Iops": 1000, "VolumeSize": 100 } ] ) ] VPC Support¶ If you are managing workers within a VPC, your worker configuration must be modified from above. You must specify the id of the subnet where you want your worker placed. You must also specify security groups created within your VPC as opposed to classic EC2 security groups. This can be done by passing the ids of the vpc security groups. Note, when using a VPC, you can not specify classic EC2 security groups (as specified by security_name). from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', ami='ami-12345', keypair_name='latent_buildbot_worker', subnet_id='subnet-12345', security_group_ids=['sg-12345','sg-67890'] ) ] Spot instances¶ If you would prefer to use spot instances for running your builds, you can accomplish that by passing in a True value to the spot_instance parameter to the EC2LatentWorker constructor. Additionally, you may want to specify max_spot_price and price_multiplier in order to limit your builds’ budget consumption. from buildbot.plugins import worker c['workers'] = [ worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large', 'ami-12345', region='us-west-2', identifier='publickey', secret_identifier='privatekey', elastic_ip='208.77.188.166', keypair_name='latent_buildbot_worker', security_name='latent_buildbot_worker', placement='b', spot_instance=True, max_spot_price=0.09, price_multiplier=1.15, product_description='Linux/UNIX') ] This example would attempt to create a m1.large spot instance in the us-west-2b region costing no more than $0.09/hour. The spot prices for ‘Linux/UNIX’ spot instances in that region over the last 24 hours will be averaged and multiplied by the price_multiplier parameter, then a spot request will be sent to Amazon with the above details. If the multiple exceeds the max_spot_price, the bid price will be the max_spot_price. Either max_spot_price or price_multiplier, but not both, may be None. If price_multiplier is None, then no historical price information is retrieved; the bid price is simply the specified max_spot_price. If the max_spot_price is None, then the multiple of the historical average spot prices is used as the bid price with no limit.
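To make the bid arithmetic described above concrete, here is a small illustrative Python function (this is not Buildbot's own code) that mirrors the stated rules; the average spot price used in the example call is a made-up value.

def compute_bid_price(avg_spot_price, price_multiplier, max_spot_price):
    """Mirror the spot bid-price rules described above (illustration only)."""
    if price_multiplier is None:
        # No historical price lookup; bid exactly the configured maximum.
        return max_spot_price
    bid = avg_spot_price * price_multiplier
    if max_spot_price is None:
        # No cap: use the multiple of the historical average directly.
        return bid
    # If the multiple exceeds the maximum, the bid is capped at max_spot_price.
    return min(bid, max_spot_price)

# Example using the configuration values above; the 24-hour average is assumed.
print(compute_bid_price(avg_spot_price=0.085, price_multiplier=1.15, max_spot_price=0.09))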
https://docs.buildbot.net/2.1.0/manual/configuration/workers-ec2.html
2021-10-16T11:51:09
CC-MAIN-2021-43
1634323584567.81
[]
docs.buildbot.net
The Fulfillment flow exports shipping and fulfillment information from NetSuite and saves it in Shopify. As soon as you enter fulfillment details in your NetSuite account and run the flow, a request is made by Celigo’s iPaaS platform integrator.io to fetch all the fulfillment and shipping associated details from NetSuite, which then makes a request Shopify API’s to transfer information pertaining to an order as defined in the Field Mappings option associated with the Fulfillment flow. The information exported to Shopify includes data such as Shipping Method, fulfillment quantity to Shopify, and tracking number. The Integration App supports both complete and partial shipments and exports all tracking number data regardless of the shipping carrier you use (USPS, UPS, Fedex, etc.). The NetSuite Fulfillment to Shopify Fulfillment Add flow is a batch data flow. Items fulfilled in NetSuite are exported to Shopify in batch flows and can be configured to transfer data in scheduled intervals. This flow can be made to run from every 15 mins to once every week. You can check the status of the export in the integrator.io dashboard. Note: Integration App only supports creating new fulfillments in Shopify. Updating an existing fulfillment in Shopify from NetSuite is not supported by the Integration App. The following flow diagram displays the information flow between Shopify and NetSuite: Pre-requisite settings to run the fulfillment flow The following are the recommended configurations and settings that must be completed in Shopify-NetSuite Integration App before you execute the fulfillment flow: - From Settings, select the appropriate saved search and click Save. - NetSuite Fulfillment to Shopify Fulfillment Add flow to enable it for retrieving the necessary fulfillment and shipping information from NetSuite. - Verify, update, and add field mappings as per your requirements. For more information on field mapping, see Field Mappings. - To associate fulfillment with a specific location and to know more about the multi-location feature, see Support for Shopify’s Multi-location inventory. Steps to run the Fulfillment flow The following steps capture the fulfillment flow to export shipping and fulfillment information from NetSuite to Shopify: - Open the order you wish to fulfill in NetSuite. You can use the Shopify order ID to find the order in NetSuite. In the following steps, we are using the Shopify order ID: 1030 to demonstrate the flow. From the Sales Order page, Click Fulfill. The Item Shipment page is displayed. In the Items tab, enter the required values in the Quantity and Location fields. In the Packages tab, enter the required values under the LBS, PACKAGE CONTENTS DESCRIPTION, and PACKAGE TRACKING NUMBER columns. This is an optional step. Enter the required information in other tabs and Click Save. The successful transaction message is displayed. From integrator.io, go to the Flows > Fulfillment section and click the run button against the NetSuite Fulfillment to Shopify Fulfillment Add flow. The Dashboard page is displayed. You can view the status of the flow on the Dashboard page. It takes a few minutes for the export flow to succeed. When the flow run completes successfully, the status of the flow is displayed as ‘Completed’. Go to Shopify Store and view the status of the order. The Shopify Order section shows the order fulfillment status as ‘Fulfilled’. - Click on the order ID, the Order details page is displayed with the fulfillment information added in NetSuite for the order. 
Additional features supported by the fulfillment flow - Invoking Customer Notification functionality provided by Shopify Whenever you fulfill an item in NetSuite and export the details in Shopify, our Integration App invokes the feature from Shopify that sends automatic emails to the customer informing him about the status of the order. The following is an example of such an automatic notification sent to a customer by Shopify when you enter the fulfillment details in NetSuite and the Integration App transfer that information to Shopify: This feature is enabled by default. If you do not wish to send notifications to your customers, you can deactivate this feature using the following steps: - Click the Field Mapping icon. The Mappings page is displayed. - Click the Settings gear-like icon corresponding to the fulfillment.notify_customer field. - On the settings select the Field Mapping Type > Hard-Coded, select Use custom value then enter “True” ” and click Save. - Multiple Tracking ID support The NetSuite-Shopify Integration App supports exporting multiple tracking IDs from NetSuite to Shopify which can be associated with an individual order. For more information, refer to the Steps to run the Fulfillment flow section. In this section, the order used as an example contains multiple tracking ids mentioned in NetSuite, which are exported to Shopify. - Pick, Pack, and Ship feature in NetSuite The NetSuite-Shopify Integration App supports exporting the fulfillment and shipment information regardless of the Pick, Pack, and Ship feature is enabled or disabled in NetSuite. If the Pick, Pack, and Ship feature are enabled, the Integration App only transfers the fulfillment and shipment information to Shopify when the item shipment status of a line item in an order is Shipped. You can enable or disable this feature in your NetSuite account by following the below instructions: Setup > Company > Enable feature > Shipping & Receiving > PICK, PACK, AND SHIP - Partial Fulfillment for Multiple Line Item When an order is placed with multiple line items, the Integration App provides you with the option to partially fulfill an order at one time and fulfill the remaining order at some other time. For example, consider an order which is placed for SKU#1 and SKU#2. For such an order, if you only want to fulfill an order for SKU#1 you can fulfill the same by adding necessary details in NetSuite. After which these details are exported to Shopify. After exporting the details of SKU#1, when you provide fulfillment details for SKU#2 in NetSuite, the Integration App will export fulfillment details for SKU#2 to Shopify. - Partial fulfillment for a line item with multiple quantities When an order is placed for a line item with multiple quantities, the Integration App provides you with the option to partially fulfill that specific line item at one time and fulfill the remaining quantities of the line item at some other time. For example, consider an order which is placed for SKU#1 with quantity 10. For such an order, if you initially want to send 5 quantities of SKU#1 to your customer, you can fulfill the same by adding necessary details in NetSuite. After which these details are exported to Shopify. After exporting the details of 5 quantities SKU#1, when you provide fulfillment details for the remaining 5 quantities in NetSuite, the Integration App will export fulfillment details for SKU#1 to Shopify. 
How to update the tracking ID for a fulfillment record already exported from NetSuite to Shopify Once a fulfillment record has been exported from NetSuite, its details (Quantity, Location, Carrier, and Tracking ID) can no longer be updated from NetSuite. However, Shopify does let you modify the carrier and tracking information associated with an order. To update the tracking ID and carrier information, and to notify the customer of the change, use the following steps: - Find the order in Shopify and click the order ID to open the order details page for the order whose carrier and tracking information you want to update. - On the order details page, find the Fulfillments section, click the More drop-down list, and select Edit Tracking. The Edit Tracking window is displayed. - Enter the new Tracking Number and Carrier details and click Save. - To send a notification to the customer about the updates you have made, select the Send notification email to customer checkbox. After the order is successfully fulfilled, the NetSuite record displays the Shopify record details such as ETail Channel (Shopify) and ETail Order ID, and the ETAIL ORDER FULFILLMENT EXPORTED checkbox is shown as checked.
https://docs.celigo.com/hc/en-us/articles/228382748
2021-10-16T12:17:32
CC-MAIN-2021-43
1634323584567.81
[array(['/hc/en-us/article_attachments/215431228/7.png', None], dtype=object) ]
docs.celigo.com
Objective To set up a Chef Server load balancer server or servers in a public or private cloud environment. Prerequisites - You must log in under a RightScale account with actorand libraryuser roles in order to complete the tutorial. - For Amazon EC2. - We strongly recommend that you set up credentials for password values and any other sensitive data included as Chef recipe inputs. Overview This tutorial describes the steps for launching one Chef server in the cloud, using the Chef Server for Linux (RightLink 10) ServerTemplate. Create Credentials Create Elastic IPs (AWS only) If you are launching Chef servers in EC2, it is recommended that you use Elastic IPs. If you haven't already done so, create an Elastic IP for the Chef Server. Be sure to create the Elastic IPs in the AWS region (e.g. 'us-east') where you intend to launch the load balancer servers. See Create Elastic IPs (EIP). Steps Add a Server Follow these steps to add a load balancer server to the deployment. - Go to the MultiCloud Marketplace (Design > MultiCloud Marketplace > ServerTemplates) and import the most recently published revision of the Chef Server for Linux (RightLink 10) ServerTemplate into the RightScale account. - From the imported ServerTemplate's show page, click the Add Server button. - Select the cloud for which you will configure a server. - Select the deployment into which the new server will be placed. - Next, the Add Server Assistant wizard will walk you through the remaining steps that are required to create a server based on the selected cloud. - Server Name - Provide a nickname for your new load balancer server (e.g., lb1). - Select the appropriate cloud-specific resources that are required in order to launch a server into the chosen cloud. The required cloud resources may differ depending on the type of cloud infrastructure. If the cloud supports multiple datacenters / zones, select a specific zone. Later, when you create the other load balancer server you will use a different datacenter / zone to ensure high-availability. For more information, see Add Server Assistant. - If you are using Elastic IPs (AWS EC2 only), select an existing Elastic IP from the drop-down, or click New to create a new one. - Click Confirm, review the server's configuration and click Finish to create the server. Configure Inputs Inheritance of Inputs.. Chef Server Inputs Launch the Server - Go to the deployment's Servers tab and launch all of the load balancer servers. When you view the input confirmation page, there should not be any required inputs with missing values. If there are any required inputs that are missing values (highlighted in red) at the input confirmation page, cancel the launch and add values for those inputs at the deployment level before launching the server again. Refer to the instructions in Launch a Server if you are not familiar with this process. Configure DNS Records If you are using Elastic IPs or already know the public IP addresses that will be used by the load balancer servers, you might have already set up the DNS records for the Chef Server.. chef-server.example.com) that points to its public IP address. The DNS records for the Chef sServer should direct traffic from the associated hostname (FQDN) (e.g. chef-server.example.com) to the application servers in its load balancing pool. Next Steps Once your server is operational you can configure your new Chef Server. Review the documents below to guide you if you are not familiar with the Chef Server configuration. 
- Setting up your client - Uploading cookbooks - Backing up the server and scheduling backups
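Before moving on to client setup, it is worth confirming that the DNS record created above really resolves to the Elastic IP and that the server is answering on HTTPS. A quick check from any workstation might look like the following — the hostname is the example one used in this tutorial, so substitute your own:

# Verify the DNS record points at the Elastic IP assigned to the Chef Server
dig +short chef-server.example.com

# Verify the server answers on HTTPS (-k skips certificate verification, since a
# freshly launched Chef Server typically presents a self-signed certificate)
curl -kI https://chef-server.example.com/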
https://docs.rightscale.com/st/rl10/chef-server/tutorial.html
2021-10-16T11:58:38
CC-MAIN-2021-43
1634323584567.81
[]
docs.rightscale.com
Starburst for data consumers # If you champion data-driven decisions in your org, Starburst has the tools to connect you to the data you need. Starburst brings all your data together in a single, federated environment. No more waiting for data engineering to develop complicated ETL. The data universe is in your hands! Starburst Enterprise is a distributed SQL query engine. Maybe you know a single variant of SQL, or maybe you know a few. Starburst’s SQL is ANSI-compliant and should feel comfortable and familiar. It takes care of translating your queries to the correct SQL syntax for your data source. All you need to access all your data from a myriad of sources is a single JDBC or ODBC client in most cases, depending on your toolkit. Whether you are a data scientist or analyst delivering critical insights to the business, or a developer building data-driven applications, you’ll find you can easily query across multiple data sources, in a single query. Fast. How does this work? # Data platforms in your organization such as Snowflake, Postgres, and Hive are defined by data engineers as catalogs. Catalogs, in turn, define schemas and their tables. Depending on the data access controls in place, discovering what data catalogs are available to you across all of your data platforms can be easy! Even through a CLI, it’s a single, simple query to get you started with your federated data: presto> SHOW CATALOGS; Catalog --------- hive_sales mysql_crm (2 rows) After that, you can easily explore schemas in a catalog with the familiar SHOW SCHEMAS command: presto> SHOW SCHEMAS FROM hive_sales LIKE `%rder%`; Schema --------- order_entries customer_orders (2 rows) From there, you can of course see the tables you might want to query: presto> SHOW TABLES FROM order_entries; Table ------- orders order_items (2 rows) You might notice that even though you know from experience that some of your data is in MySQL and others in Hive, they all show up in the unified SHOW CATALOGS results. From here, you can simply join the data sources from different platforms as if they were from different tables. You just need to use their fully qualified names:; How do I get started? # The first order of business is to get the latest Starburst JDBC or ODBC driver and get it installed. Note that even though you very likely already have a JDBC or ODBC driver installed for your work, you do need the Starburst-specific driver. Be careful not to install either in the same directory with other JDBC or ODBC drivers! If your data ops group has not already given you the required connection information, reach out to them for the following: - the JDBC URL - jdbc:presto://example.net:8080 - whether your org is using SSL to connect - the type of authentication your org is using - username or LDAP When you have that info and your driver is installed, you are ready to connect. What kind of tools can I use? # More than likely, you can use all your current favorite client tools, and even ones on your wishlist with the help of our tips and instructions. How do I migrate my data sources to Starburst? # In some cases, this is as easy as changing the sources in your FROM clauses. For some queries there could be slight differences between your data sources’ native SQL and SQL, so some minor query editing is required. Rather than changing these production queries on the fly, we suggest using your favorite SQL client or our own CLI to test your existing queries before making changes to production. 
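The federated query promised above ("You just need to use their fully qualified names") appears to have been lost when this page was captured, so here is an illustrative reconstruction. The hive_sales catalog and order_entries schema follow the earlier SHOW CATALOGS and SHOW SCHEMAS output, but the mysql_crm sales schema, the customers table, and all column names are invented for the example:

SELECT o.orderkey,
       o.orderdate,
       c.customer_name
FROM hive_sales.order_entries.orders AS o
JOIN mysql_crm.sales.customers AS c
  ON o.customer_id = c.customer_id
WHERE o.orderdate >= DATE '2021-01-01';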
If you are migrating from Hive, we have a migration guide in our documentation. To help you learn how others have made the switch, here is a handy walk-through of using Looker and Starburst Enterprise together. Where can I learn more about Starburst? # From our documentation, of course! Visit our data consumer’s user guide.
https://docs.starburst.io/data-consumer/introduction.html
2021-10-16T11:43:05
CC-MAIN-2021-43
1634323584567.81
[]
docs.starburst.io
An IfcProjectLibrary collects all library elements that are included within a referenced project data set. Examples for project libraries include: The inherited attributes RepresentationContexts and UnitsInContext have the following meaning: NOTE It is generally discouraged to use a different length measure and plane angle measure in an included project library compared with the project itself. It may lead to unexpected results for the shape representation of items included in the project library. HISTORY New entity in IFC4. Instance diagram Libraries of components standardized by DOT agencies may be referenced from external locations and encapsulated within project libraries. Such components may include DOT-standardized assemblies for piers, abutments, and bridge decks, as well as more general-purpose shapes such as AISC steel shapes and ACI rebar bending types. As units may vary between components, each library may define its own. Project Declaration The Project Declaration concept applies to this entity. Project Units The Project Units concept applies to this entity. <xs:element name="IfcProjectLibrary" type="ifc:IfcProjectLibrary" substitutionGroup="ifc:IfcContext" nillable="true"/> <xs:complexType name="IfcProjectLibrary"> <xs:complexContent> <xs:extension base="ifc:IfcContext"/> </xs:complexContent> </xs:complexType> ENTITY IfcProjectLibrary SUBTYPE OF (IfcContext); END_ENTITY;
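To make the inherited attributes concrete, a minimal STEP physical file (SPF) fragment is sketched below. It is purely illustrative — the GlobalId, the library name, and the #50 reference to an IfcUnitAssignment are made-up values — and the attribute order simply follows the inherited IfcContext definition (GlobalId, OwnerHistory, Name, Description, ObjectType, LongName, Phase, RepresentationContexts, UnitsInContext):

/* Hypothetical project library carrying its own unit assignment (#50 defined elsewhere) */
#100= IFCPROJECTLIBRARY('2O2Fr$t4X7Zf8NOew3FLOH', $, 'DOT Standard Pier Components', 'Standardized pier and abutment assemblies', $, $, $, $, #50);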
http://docs.buildingsmartalliance.org/IFC4x2_Bridge/schema/ifckernel/lexical/ifcprojectlibrary.htm
2021-10-16T11:19:29
CC-MAIN-2021-43
1634323584567.81
[]
docs.buildingsmartalliance.org
Pipeline A pipeline is a unidirectional flow that defines the order in which changes will be migrated, starting with the development orgs and finishing in a production org: You can create a pipeline using the wizard in the Pipeline Manager. Check out the article Creating Pipelines for more information about how to configure a new pipeline. The Pipeline record includes different fields that provide relevant information about your pipeline, such as the repository or the main branch used in the pipeline, as well as fields that enable you to further customize your pipeline and your DevOps process for that particular pipeline. Let’s take a look at the Pipeline record and dig deeper into the fields this record includes: Relevant Fields Tabs - Projects: In this tab, you can link the record of the project associated with the pipeline. - System Properties: If you are working with a non-Salesforce pipeline, you may need to provide some additional values that Copado will reference during the promotion and deployment processes. You can add these values to the System Properties tab.
https://docs.copado.com/article/ckxswevdln-pipeline
2021-10-16T12:11:23
CC-MAIN-2021-43
1634323584567.81
[array(['https://files.helpdocs.io/U8pXPShac2/articles/ckxswevdln/1607690340118/new-pipeline-copia.png', None], dtype=object) array(['https://files.helpdocs.io/U8pXPShac2/articles/ckxswevdln/1624291766706/pipeline-record.png', 'Pipeline record'], dtype=object) ]
docs.copado.com
sockets.. It is recommended to use the server in a with statement. Then call the handle_request() or serve_forever() method of the server object to process one or many requests. Finally, call server_close() to close the socket (unless you used a with statement)..(). socketserver.ForkingMixIn.server_close()waits until all child processes complete, except if socketserver.ForkingMixIn.block_on_closeattribute is false. socketserver.ThreadingMixIn.server_close()waits until all non-daemon threads complete, except if socketserver.ThreadingMixIn.block_on_closeattribute is false. Use daemonic threads by setting ThreadingMixIn.daemon_threadsto Trueto not wait until threads complete. Modifié dans la version 3.7: socketserver.ForkingMixIn.server_close()and socketserver.ThreadingMixIn.server_close()now waits until all child processes and non-daemonic threads complete. Add a new socketserver.ForkingMixIn.block_on_closeclass attribute to opt-in for the pre-3.7 behaviour. - class socketserver. ForkingTCPServer¶ - class socketserver. ForkingUDPServer¶ - class socketserver. ThreadingTCPServer¶ - class socketserver. ThreadingUDPServer¶ These classes are pre-defined using the mix-in classes.ors. Objets Serveur. Modifié dans la version 3.3: Added service_actionscall to the serve_forevermethod. service_actions()¶ This is called in the serve_forever()loop. This method can be overridden by subclasses or mixin classes to perform actions specific to a given service, such as cleanup actions. Nouveau dans la version 3.3. shutdown()¶ Tell the serve_forever()loop to stop and wait until it does. shutdown()must be called while serve_forever()is running in a different thread otherwise it will deadlock.. finish_request(request, client_address. Modifié dans la version 3.6: Support for the context manager protocol was added. Exiting the context manager is equivalent to calling server_close().. The rfileattributes of both classes support the io.BufferedIOBasereadable interface, and DatagramRequestHandler.wfilesupports the io.BufferedIOBasewritable interface. Modifié dans la version 3.6: StreamRequestHandler.wfilealso supports the io.BufferedIOBasewritable interface. Exemples¶ socketserver.TCPServer Example¶ This is the server side: import socketserver class MyTCPHandler(socketserver.BaseRequestHandler): """ The request handler: Serveur : $ socketserver.UDPServer Example¶ with socketserver.UDPServer((HOST, PORT), MyUDPHandler) as server:. Asynchronous Mixins¶)::. Available only on POSIX platforms that support fork().
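The TCP server example above is cut off in this copy — only the class declaration and part of its docstring survive. A complete version, reconstructed along the lines of the standard library's echo-server example, looks like this:

import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The request handler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication with the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "localhost", 9999

    # Create the server, binding to localhost on port 9999; the with
    # statement guarantees server_close() is called on exit
    with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server:
        # Activate the server; this will keep running until you
        # interrupt the program with Ctrl-C
        server.serve_forever()

Running it and then sending a line of text at it (for example with "nc localhost 9999") should return the same line upper-cased.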
https://docs.python.org/fr/3.8/library/socketserver.html
2021-10-16T12:40:50
CC-MAIN-2021-43
1634323584567.81
[]
docs.python.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the ModifySnapshotAttribute operation. Adds or removes permission settings for the specified snapshot. You may add or remove specified Amazon Web Services account IDs from a snapshot's list of create volume permissions, but you cannot do both in a single operation. If you need to both add and remove account IDs for a snapshot, you must use multiple operations. You can make up to 500 modifications to a snapshot in a single operation. Encrypted snapshots and snapshots with Amazon Web Services Marketplace product codes cannot be made public. Snapshots encrypted with your default KMS key cannot be shared with other accounts. For more information about modifying snapshot permissions, see Share a snapshot in the Amazon Elastic Compute Cloud User Guide. Namespace: Amazon.EC2.Model Assembly: AWSSDK.EC2.dll Version: 3.x.y.z The ModifySnapshotAttributeRequest type exposes the following members This example modifies snapshot ``snap-1234567890abcdef0`` to remove the create volume permission for a user with the account ID ``123456789012``. If the command succeeds, no output is returned. var client = new AmazonEC2Client(); var response = client.ModifySnapshotAttribute(new ModifySnapshotAttributeRequest { Attribute = "createVolumePermission", OperationType = "remove", SnapshotId = "snap-1234567890abcdef0", UserIds = new List<string> { "123456789012" } }); This example makes the snapshot ``snap-1234567890abcdef0`` public. var client = new AmazonEC2Client(); var response = client.ModifySnapshotAttribute(new ModifySnapshotAttributeRequest { Attribute = "createVolumePermission", GroupNames = new List<string> { "all" }, OperationType = "add", SnapshotId = "snap-1234567890abcdef0" }); .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EC2/TModifySnapshotAttributeRequest.html
2021-10-16T12:11:33
CC-MAIN-2021-43
1634323584567.81
[]
docs.aws.amazon.com
Difference between revisions of "TS-4710" Revision as of 17:07, 7 August 2013 1 Overview The TS-4710 is a TS-Socket Macrocontroller Computer on Module based on the TS-4700 with a revised FPGA to CPU interface,-4710 includes the items that are commonly necessary for development with the TS-4710. The other options include: 2.2 Booting up the board Using one of the "off the shelf" baseboards, be sure to refer to that baseboard's manual here. Different baseboards use different power connectors, voltage ranges, and may have different power requirements. The macrocontroller only requires a 5V rail from the baseboard which may be regulated from other voltage ranges. Refer to the #TS-Socket Connector section for the POWER pins. While operating the board will typically idle at around 350mA@5V with the PXA166 or 450mA with the PXA168, but this can vary, and larger cards can consume more. A typical power supply for just the macrocontroller will allow around 1A, but a larger power supply may be needed depending on your peripherals. (ttyS0) is a TTL UART at 115200 baud, 8n1 (8 data bits 1 stop bit), and no flow control. On the macrocontroller this is CN2_93 (TX), CN2_95 (RX). Various baseboards bring this out using different methods. The TS-8500 and TS-8200 baseboards bring out a DB9 connector with the console as RS232. Other baseboards have a jumper to switch between the console port and another serial port. Some baseboards require an adapter board like the TS-9449. Refer to the baseboard model you are using [Main_Page#Baseboards|here]] for more information on any specific jumpers or ports to connect to for console..4 Initramfs. 3.5 Creating a Custom Startup Splash The default image includes a splash image that displays the TS logo. You can replace this with your own logo by replacing the files in /ts/splash/, or disable the splash screen by removing these files. The fbsplash utility that displays the splash logo in a ppm format. You can use graphics applications such as Gimp which can export to ppm, or you can use imagemagick in Linux to convert another file to ppm: convert splash.png splash.ppm The image resolution usually should match the screen, but otherwise it will be aligned to the upper left corner. If the system is configured to automatically boot to Debian it will display the splash screen until X11 is started. Graphical Development For drawing interfaces in linux there are a few options. To speak at the lower levels, you can use DirectFB or X11. If you want to draw a simple user interface at a much higher level you should use a graphical toolkit as listed below. Linux has 3 major toolkits used for developing interfaces. These include QT, GTK, and WxWidgets. For development you may want to build the interface on your desktop PC, and then connect with any specific hardware functionality when it runs on the board. You should also be aware of the versions of GTK, QT, and WX widgets available in the current provided distribution as their APIs can all change significantly between versions. These examples below should help get you started in compiling a graphical hello world application, but for further development you will need to refer to the documentation for the specific toolkits. Development environment available for Windows, Linux, and Mac. The most common utility used is QT Creator which includes the IDE, UI designer, GDB, VCS, a help system, as well as integration with their own build system. See QT's documentation for a complete list of features. QT can connect with our cross compilers. 
If you are working with Linux you can use the same cross compiler and connect it with qtcreator. QT also offers professional training from their website. QT has a large range of supported language bindings, but is natively written with C++. Hello world example Install the build dependencies # Make sure you have a valid network connection # This will take a while to download and install. apt-get update && apt-get install libqt4-dev qt4-dev-tools build-essential -y For deployment you only need the runtime libraries. These are divided up by functionality, so use 'apt-cache search' to find the necessary qt4 modules for your project. You can also use the 'libqt4-dev' for deployment, it just may contain more than you need. This simple hello world app resizes the window when you press the button. 'qtexample.cpp' #include <QApplication> #include <QPushButton> int main(int argc, char *argv[]) { QApplication app(argc, argv); QPushButton hello("Hello world!"); hello.resize(100, 30); hello.show(); return app.exec(); } To compile it: # Generate the project file qmake -project # generate a Makefile qmake # build it (will take approximately 25 seconds) make This will create the project named after the directory you are in. In this example I'm in /root/ so the binary is 'root'. # DISPLAY is not defined from the serial console # but you do not need to specify it if running # xterm on the display. DISPLAY=:0 ./root GTK Development is possible on Windows, Linux, and Mac, but will be significantly easier if done from Linux. Typically you would use the Anjuta IDE which includes IDE, UI designer (GtkBuilder/glade), GDB, VCS, devhelp, and integration with the autotools as a built system. This is only available for Linux. GTK also has a large range of supported bindings, though is natively written in C. Hello world example Install the build dependencies # Make sure you have a valid network connection # This will take a while to download and install. apt-get update && apt-get install libgtk2.0-dev pkg-config build-essential -y For deployment you only need the runtime library 'libgtk2.0-0'. The below example will echo to the terminal the application is run from every time you press the button. main.c #include <gtk/gtk.h> static void hello_cb(GtkWidget *widget, gpointer data) { g_print ("Hello World\n"); } int main(int argc, char *argv[]) { GtkWidget *window; GtkWidget *button; gtk_init(&argc, &argv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); button = gtk_button_new_with_label("Hello World"); g_signal_connect(button, "clicked", G_CALLBACK(hello_cb), NULL); gtk_container_add(GTK_CONTAINER(window), button); gtk_widget_show_all(window); gtk_main(); return 0; } To compile this: gcc main.c -o test `pkg-config --cflags --libs gtk+-2.0` To run this example: # DISPLAY is not defined from the serial console # but you do not need to specify it if running # xterm on the display. DISPLAY=:0 ./test Hello world tutorial. This uses the simplest example as it does not use any interface design and creates all widgets from code. Micah Carrick's GTK/glade tutorial. This will show you how to use the Glade designer (integrated with Anjuta as well) to create an xml description for your interface, and how to load this in your code. wxWidgets is a cross platform graphics library that uses the native toolkit of whichever platform it is on. It will draw winforms on Windows, and GTK on Linux. 
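One note before the wxWidgets build steps that follow: the main.cpp listing further down appears truncated in this copy — the OnInit() definition and the IMPLEMENT_APP macro are missing and a stray closing brace remains. A complete minimal version, following the usual wxWidgets 2.8 hello-world pattern (the g++ compile and run commands given below still apply), would look roughly like this:

#include "wx/wx.h"

class HelloWorldApp : public wxApp
{
public:
    virtual bool OnInit();
};

DECLARE_APP(HelloWorldApp);

// Generates the main() entry point and creates the application object
IMPLEMENT_APP(HelloWorldApp)

// Called on application startup: create and show a top-level frame
bool HelloWorldApp::OnInit()
{
    wxFrame *frame = new wxFrame(NULL, wxID_ANY, wxT("Hello World"));
    frame->Show(true);
    SetTopWindow(frame);
    return true;
}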
While wxWidgets has many tools available for development, Code::Blocks seems the most recommended as it includes wxSmith for designing the user interface, as well as including an IDE and GDB support. The wxWidgets toolkit has some binding support, and is natively written in C++. Hello world example Install the build dependencies # Make sure you have a valid network connection # This will take a while to download and install. apt-get update && apt-get install wx2.8-headers wx2.8-i18n libwxgtk2.8-dev build-essential -y The below example will simply draw a frame that prints 'hello world'. main.cpp #include "wx/wx.h" class HelloWorldApp : public wxApp { public: virtual bool OnInit(); }; DECLARE_APP(HelloWorldApp); } To compile this example: g++ main.cpp `wx-config --cxxflags --libs` -o test To run this example: # DISPLAY is not defined from the serial console # but you do not need to specify it if running # xterm on the display. DISPLAY=:0 ./test 5.3 6 Features 6.1 Software images 6.2 CPU The TS-4710 supports the PXA166 from Marvell's Armada 100 series. The common features will be described in other sections, but for more details see the CPU user guide. 6.3.6 External Reset The external reset pin (DIO 9) will reset the CPU by default when it is low. You can disable this functionality to use this as a DIO by running: tshwctl --resetswitchoff This can be disabled with the CFG_RESETSW_EN=0 option in the #Initramfs. 6.8.9.11 LCD Interface This interface presents a standard 24 bit LCD video output. The Linux operating system we provide includes drivers for the framebuffer device and X11 support. If you are using our displays the driver is typically set up in the init-xorgenv file in the initrd which will detect which display you are using and set up the resolution accordingly. See the #Graphical Development section of the manual for more details on examples on drawing to this interface. For the specifics of this interface for custom baseboard implementations please refer to the CPU manual. 6.12 Touchscreen Backlight Control A PWM signal on this line is used to control the brightness of the LCD backlight. In the ts4700.subr file we implement several commands for controlling this backlight. backlight_on() backlight_off() backlight_low() backlight_medium() backlight_high() See #DIO for more information on MFP_85 and the CPU GPIO. 6.13.14.15 DIO This board uses both CPU and a DIO controller in the FPGA. The CPU DIO typically has 1-7 functions associated with various pins (I2C, PWM, SPI, etc). See the CPU manual. Bit masking: Any bits not expressly mentioned here should be masked out. Direction setting: 0 is input, 1 is output...16.17 USB 6.17.17.18 PCIe The TS-Socket format brings out a PCIe lane which can be used for custom baseboards. This is only available on the PXA168. Our current off-the-shelf designs do not implement this for any peripherals. Refer to the cpu manual for more details on PCIe. 6.19.20 I2S Audio These pins can be connected to an I2S CODEC for an audio output channel. Our default kernel contains a configuration for alsa support using an sgtl5000. See the provided Linux kernel for more information on other supported audio codecs. 6.21 Camera Interface The Marvell processor includes CMOS camera interface which is available on these lines. Please see the CPU manual for more details. 6.22 CPU JTAG. 6.23.24 Video Acceleration Marvell provides patches for the gstreamer library that can be used for video acceleration. 
There are bindings for many languages if you want to implement video playback in your application. However as an example you can try this out using the 'totem' player. 6.25.25.25.26.27.28.29.29.1 PC104.31. 7 Connectors 7.1 TS-Socket The TS-SOCKET macrocontrollers all use two high density 100 pin connectors for power and all I/O. These follow a common pinout for various external interfaces so new modules can be dropped in to lower power consumption or use a more powerful processor. The male connector is on the baseboard, and the female connector is on the macrocontroller. You can find the datasheet for the baseboard's male connector here. This can be ordered from the TS-Socket macrocontroller product page as CN-TSSOCKET-M-10 for a 10 pack, or CN-TSSOCKET-M-100 for 100 pieces, or from the vendor of your choice, the part is an FCI "61083-102402LF". We have an Eaglecad library available for developing a custom baseboard here. We also provide the entire PCB design for the TS-8200 baseboard here which you can modify for your own design. In our schematics and our table layout below, we refer to pin 1 from the male connector on the baseboard. - ↑ 1.0 1.1 1.2 1.3 The FPGA JTAG pins are not recommended for use and are not supported. See the #FPGA Programming section for the recommended method to reprogram the FPGA. - ↑ EXT_RESET# is an input used to reboot the CPU. Do not drive active high, use open drain. - ↑ This is an output which can be manipulated in the #Syscon. This pin can optionally be connected to control a FET to a separate 5V rail for USB to allow software to reset USB devices. - ↑ OFF_BD_RESET# is an output from the macrocontroller that automatically sends a reset signal when the unit powers up or reboots. It can be connected to any IC on the base board that requires a reset. - ↑ 5.0 5.1 5.2 5.3 The POWER pins should each be provided with a 5V source. - ↑ This defaults to an offboard reset on our carrier boards the 8550, 8500, 8380, 8280, 8290, and 8160. Customer carrier boards can turn on this offboard reset with tshwctl --resetswitchon - ↑ 7.0 7.1 The TS-4710 regulates a 3.3V rail which can source up to 700mA. Designs should target a 300mA max if they intend to use other macrocontrollers. - ↑ This pin is used as a test point to verify the CPU has a correct voltage for debugging - ↑ 9.0 9.1 9.2 9.3. - ↑ 10.0 10.1 10.2 10.3 PCIe is only present on the TS-4710-1066 - ↑ This pin is used as a test point to verify the RAM has a correct voltage for debugging - ↑ This pin is used as a test point for debugging - ↑ This should be supplied with 5V to power the USB ports.) 9 Product Notes..
https://docs.embeddedarm.com/index.php?title=TS-4710&diff=4354&oldid=4256
2021-10-16T11:13:29
CC-MAIN-2021-43
1634323584567.81
[]
docs.embeddedarm.com
Process overview What is a process? A process is a type of workflow that ensures a strict sequential set of steps performed on form data. Flow Admins for a process can set up a form to carry data, and then make a predefined path for it to follow. The system automatically routes the requests through the various steps until the item is complete. Processes are a great fit in places where you would want strict control and efficiency. Common processes examples - Vacation request - Purchase request - Employee onboarding - Budget approval request - Visitor pass request - Vendor enrollment In the leave request process, users can fill out a form to apply for personal and sick leave. Once the form is submitted, it is sent for approval to a manager, sent to HR and/or payroll for processing. When a process is the best flow Processes are ideal when you have to streamline a sequence of steps, performed by people to achieve an objective and it can be modified at any time. It best suits when you want to automate repeatable unstructured tasks in your organization. The form, its fields, and the workflow need to be well defined. Every time the form is initiated, it needs to go through the predefined steps to the associated person for approval. Think you might need a process, case flow, or channel instead? Learn more here.
https://docs.kissflow.com/article/391kswo9al-process-overview
2021-10-16T12:20:44
CC-MAIN-2021-43
1634323584567.81
[array(['https://files.helpdocs.io/vy1bn54mxh/articles/391kswo9al/1562149220134/process.png', None], dtype=object) ]
docs.kissflow.com
Sam Whited 3 August 2017 The day before GopherCon, a group of Go team members and contributors gathered in Denver to discuss and plan for the future of the Go project. This was the first ever event of its kind, a major milestone for the Go project. The event comprised a morning session revolving around focused discussions on a theme, and an afternoon session made up of round table discussions in small break-out groups. The compiler and runtime session started out with a discussion about refactoring gc and related tools into importable packages. This would reduce overhead in the core tools and in IDEs which could embed the compiler themselves to do quick syntax checking. Code could also be compiled entirely in memory, which is useful in environments that don't provide a filesystem, or to run tests continually while you develop to get a live report of breakages. More discussion about whether or not to pursue this line of work will most likely be brought up on the mailing lists in the future. gc There was also a great deal of discussion around bridging the gap between optimized assembly code and Go. Most crypto code in Go is written in assembly for performance reasons; this makes it hard to debug, maintain, and read. Furthermore, once you've ventured into writing assembly, you often can't call back into Go, limiting code reuse. A rewrite in Go would make maintenance easier. Adding processor intrinsics and better support for 128-bit math would improve Go's crypto performance. It was proposed that the new math/bits package coming in 1.9 could be expanded for this purpose. math/bits Not being all that familiar with the development of the compiler and runtime, this for me was one of the more interesting sessions of the day. I learned a lot about the current state of the world, the problems, and where people want to go from here. After a quick update from the dep team on the status of the project, the dependency management session gravitated towards how the Go world will work once dep (or something dep-like) becomes the primary means of package management. Work to make Go easier to get started with and make dep easier to use has already started. In Go 1.8, a default value for GOPATH was introduced, meaning users will only have to add Go's bin directory to their $PATH before they can get started with dep. GOPATH $PATH Another future usability improvement that dep might enable, is allowing Go to work from any directory (not just a workspace in the GOPATH), so that people can use the directory structures and workflows they're used to using with other languages. It may also be possible to make go install easier in the future by guiding users through the process of adding the bin directory to their path, or even automating the process. There are many good options for making the Go tooling easier to use, and discussion will likely continue on the mailing lists. go install The discussions we had around the future of the Go language are mostly covered in Russ Cox's blog post: Toward Go 2, so let's move on to the standard library session. As a contributor to the standard library and subrepos, this session was particularly interesting to me. What goes in the standard library and subrepos, and how much it can change, is a topic that isn't well defined. It can be hard on the Go team to maintain a huge number of packages when they may or may not have anyone with specific expertise in the subject matter. 
To make critical fixes to packages in the standard library, one must wait 6 months for a new version of Go to ship (or a point release has to be shipped in the case of security issues, which drains team resources). Better dependency management may facilitate the migration of some packages out of the standard library and into their own projects with their own release schedules. There was also some discussion about things that are difficult to achieve with the interfaces in the standard library. For instance, it would be nice if io.Reader accepted a context so that blocking read operations could be canceled. io.Reader More experience reports are necessary before we can determine what will change in the standard library. A language server for editors to use was a hot topic in the tooling session, with a number of people advocating for IDE and tool developers to adopt a common "Go Language Server" to index and display information about code and packages. Microsoft's Language Server Protocol was suggested as a good starting point because of its wide support in editors and IDEs. Jaana Burcu Dogan also discussed her work on distributed tracing and how information about runtime events could be made easier to acquire and attached to traces. Having a standard "counter" API to report statistics was proposed, but specific experience reports from the community will be required before such an API can be designed. The final session of the day was on the contributor experience. The first discussion was all about how the current Gerrit workflow could be made easier for new contributors which has already resulted in improvements to the documentation for several repos, and influenced the new contributors workshop a few days later! Making it easier to find tasks to work on, empowering users to perform gardening tasks on the issue tracker, and making it easier to find reviewers were also considered. Hopefully we'll see improvements to these and many more areas of the contribution process in the coming weeks and months! In the afternoon, participants broke out into smaller groups to have more in-depth discussions about some of the topics from the morning session. These discussions had more specific goals. For example, one group worked on identifying the useful parts of an experience report and a list of existing literature documenting Go user experiences, resulting in the experience report wiki page. Another group considered the future of errors in Go. Many Go users are initially confused by, or don't understand the fact that error is an interface, and it can be difficult to attach more information to errors without masking sentinel errors such as io.EOF. The breakout session discussed specific ways it might be possible to fix some of these issues in upcoming Go releases, but also ways error handling could be improved in Go 2. error io.EOF Outside of the technical discussions, the summit also provided an opportunity for a group of people from all over the world who often talk and work together to meet in person, in many cases for the first time. There is no substitute for a little face-to-face time to build a sense of mutual respect and comradeship, which is critical when a diverse group with different backgrounds and ideas needs to come together to work in a single community. During the breaks, Go team members dispersed themselves among the contributors for discussions both about Go and a little general socialization, which really helped to put faces to the names that review our code every day. 
As Russ discussed in Toward Go 2, communicating effectively requires knowing your audience. Having a broad sample of Go contributors in a room together helped us all to understand the Go audience better and start many productive discussions about the future of Go. Going forward, we hope to do more frequent events like this to facilitate discourse and a sense of community. Photos by Steve Francia
https://docs.studygolang.com/blog/contributors-summit
2021-10-16T13:06:52
CC-MAIN-2021-43
1634323584567.81
[]
docs.studygolang.com
Difference between revisions of "Updating MediaWiki" Revision as of 23:32, 3 August. In this guide, we assume you are familiar with the files on your ULYSSIS account. If you don't know how to access these files, please read Accessing your files first. Downloading the latest version To start updating MediaWiki, you will need to download the version you want to update to. If you arrived at this page after receiving an email from our Software Version Checker, follow the instructions in the next paragraph. Otherwise, you can skip the next paragraph. The email you received the the official table. In this table, currently supported versions are in bold. Click on the link of the version you want to download (if you need to choose a version, make sure to choose a supported, newer, preferably LTS version). This will redirect you to a page with information about this version. The first paragraph on this page contains a link to mediawiki-xxx.tar.gz. Download this file and save it somewhere on your PC. this directory to www/wiki_old, or something similar. Installing the new files Now, you will need to upload the mediawiki-xxx.tar.gz file you downloaded in step 1 next to the old installation directory. For example, if your old installation directory is located in www/wiki_old, the old installation. .htaccessfile, if present. Make sure you can view hidden files; to enable this for Cyberduck, you can look at Accessing your files. Updating extensions If you use any extensions that are not bundled with MediaWiki by default, you should update them too. For example, you might have the ULYSSIS extensions MediaWikiShibboleth or CompressUploads installed.. This script can be executed using the Cyberduck "Send Command" feature. Enter the following command in the pop-up box: php <wiki installation location>/maintenance/update.php For example, if your wiki is located at www/wiki, the command should be as follows: After pressing "Send", the command will be executed on the server. If everything went well, you should see a lot of output, ending with something like: Congratulations! You successfully updated MediaWiki. Still, there are two more important steps you must perform: - Test your new MediaWiki installation: make sure all basic functionality (viewing, editing pages, file upload) works and all your extensions function properly. - Delete the old installation: for example, if this is stored in www/wiki_old,.
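The guide above performs these file operations through Cyberduck, but on accounts with shell access the same sequence can be scripted. The sketch below assumes the wiki lives in www/wiki and that the downloaded release is mediawiki-1.35.x.tar.gz — both are placeholders, so substitute your own paths and version:

# Move the old installation aside
mv www/wiki www/wiki_old

# Unpack the new release next to it and rename it into place
tar -xzf mediawiki-1.35.x.tar.gz
mv mediawiki-1.35.x www/wiki

# Carry over configuration and uploads (plus .htaccess, if present)
cp www/wiki_old/LocalSettings.php www/wiki/
cp -a www/wiki_old/images/. www/wiki/images/
# cp www/wiki_old/.htaccess www/wiki/
# repeat for any third-party extensions or skins you had installed, e.g.:
# cp -a www/wiki_old/extensions/MediaWikiShibboleth www/wiki/extensions/

# Run the database/schema updater
php www/wiki/maintenance/update.php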
https://docs.ulyssis.org/index.php?title=Updating_MediaWiki&diff=1481&oldid=1479
2021-10-16T11:50:15
CC-MAIN-2021-43
1634323584567.81
[]
docs.ulyssis.org
Syntax: continue [on | off] The CONTINUE command instructs Reflection FTP to ignore errors that occur during a wildcard file transfer initiated at the FTP command line. File transfer proceeds as though no error occurred, until all files satisfying the wildcard specification have been transferred. CONTINUE with no arguments tells Reflection FTP to ignore an error in the next MGET or MPUT command only. The CONTINUE command only applies to the series of commands that comprise an MGET or MPUT block (such as LIST, GET, PUT, CD). If an error is encountered in any of the commands in the series, the script will stop after it finishes the complete MGET or MPUT command series. To allow the script to process further commands, change SET-ABORT-ON-ERROR to NO. The CONTINUE command does not apply to drag-and-drop file operations. on Tells Reflection to ignore all file transfer errors, as if every subsequent MGET and MPUT command were preceded by a CONTINUE. off Reverses the ON option. This sequence of commands instructs Reflection FTP to ignore any error in the next MPUT command. Without CONTINUE, the MPUT command aborts if an error occurs during the transfer. CONTINUE MPUT ACCT*.TXT See ABORT-ON-ERROR Script Sample for an additional example.
https://docs.attachmate.com/Reflection/2008/R1/Guide/pt/user-html/7480.htm
2021-10-16T12:26:14
CC-MAIN-2021-43
1634323584567.81
[]
docs.attachmate.com
ssh-keygen - Creation, management, and conversion of keys used for client and server authentication. ssh-keygen [-b bits] -t type [-N new_passphrase] Requests a change of the comment in the private and public key files. This operation is only supported for RSA1 keys. The program will prompt for the file containing the private keys, for the passphrase if the key has one, and for the new comment. Uses the specified private key to derive a new copy of the public key. You can specify the key file using -f. If you don't specify a file, you are queried for a file name. ssh-keygen returns 0 (zero) if the command completes successfully. Any non-zero value indicates a failure.
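As a concrete illustration of the operations described above — the key type, bit size, passphrase, and file names are arbitrary example values, and the flag letters follow the usual OpenSSH conventions (check Reflection's own option list if they differ):

# Generate a 2048-bit RSA key pair protected by a passphrase,
# writing the private key to mykey and the public key to mykey.pub
ssh-keygen -t rsa -b 2048 -N "my passphrase" -f mykey

# Later, derive a fresh copy of the public key from the private key
ssh-keygen -y -f mykey > mykey.pub

# A return value of 0 (zero) indicates the command completed successfully
echo $?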
https://docs.attachmate.com/Reflection/2008/R1/Guide/pt/user-html/ssh-keygen_command_rf.htm
2021-10-16T10:57:59
CC-MAIN-2021-43
1634323584567.81
[]
docs.attachmate.com
Column Button The column button is displayed if both the column header panel and indicator panel are visible. The column button has no default functionality. You can, however, handle the TreeListState.ColumnButtonPressed event to perform specific actions each time an end-user clicks the button. The table below lists the main properties which affect the element's appearance:
https://docs.devexpress.com/WindowsForms/1064/controls-and-libraries/tree-list/visual-elements/column-button
2021-10-16T13:24:27
CC-MAIN-2021-43
1634323584567.81
[array(['/WindowsForms/images/vecolumnbutton3369.png', 'veColumnButton'], dtype=object) ]
docs.devexpress.com
Auto Register¶ This section shows how to enable the lmp-device-auto-register recipe. This recipe creates a systemd oneshot service that will automatically register the device on first boot once it has internet connectivity. This is done by providing an API Token that has devices:create scope. Warning Do not use the API Token in production. The use of an API Token is only intended for usage in a development environment. For more information, read Manufacturing Process for Device Registration. As customers move closer to production, do not hesitate to contact Foundries.io to discuss the best practices to automatically register devices. The recipe lmp-device-auto-register is provided by meta-lmp and can be added by customizing your meta-subscriber-overrides.git. Prerequisites¶ To follow this section, it is important to have: - Completed the Getting started until the Flash your Device section. Creating Token¶ Go to Tokens and create a new Api Token by clicking on + New Token. Complete with a Description and the Expiration date and select next. Select the device:create token and select your Factory. You can later revoke this access and set up a new token once you are familiar with the API Access.-device-auto-register \ packagegroup-core-full-cmdline-extended \ ${@bb.utils.contains('LMP_DISABLE_GPLV3', '1', '', '${CORE_IMAGE_BASE_INSTALL_GPLV3}', d)} \ " Configuring the LmP Auto Register¶ Create the required directory structure for this recipe: mkdir -p recipes-support/lmp-device-auto-register/lmp-device-auto-register Create the api-token file and replace <YOUR_API_TOKEN> with the scoped token created in the previous steps: gedit recipes-support/lmp-device-auto-register/lmp-device-auto-register/api-token recipes-support/lmp-device-auto-register/lmp-device-auto-register/api-token: <YOUR_API_TOKEN> Create the file lmp-device-auto-register.bbappend in order to give the recipe access to the api-token file. gedit recipes-support/lmp-device-auto-register/lmp-device-auto-register.bbappend recipes-support/lmp-device-auto-register/lmp-device-auto-register.bbappend: FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" Add the changed files, commit and push: git add recipes-samples/images/lmp-factory-image.bb git add recipes-support/lmp-device-auto-register/lmp-device-auto-register/api-token git add recipes-support/lmp-device-auto-register/lmp-device-auto-register.bbappend git commit -m "lmp-device-auto-register: Adding recipe" git push The latest Target named platform-devel should be the CI job you just created. When FoundriesFactory CI finishes all jobs, download and flash the image. Testing Auto Register¶ After booting the new image, if your device is connected to the internet, the device will automatically register to your Factory and should be visible by navigating to the web interface at, clicking your Factory and selecting the Devices tab. 
On your device, use the following command to list the lmp-device-auto-register service: systemctl list-unit-files | grep enabled | grep lmp-device-auto-register Example Output: lmp-device-auto-register.service enabled enabled Verify the lmp-device-auto-register application status: systemctl status lmp-device-auto-register Example Output: lmp-device-auto-register.service - Script to auto-register device into Factory Loaded: loaded (/usr/lib/systemd/system/lmp-device-auto-register.service; enabled; vendor preset: enabled) Active: active (exited) since Sun 2021-09-12 17:34:06 UTC; 5min ago Process: 774 ExecStart=/usr/bin/lmp-device-auto-register (code=exited, status=0/SUCCESS) Main PID: 774 (code=exited, status=0/SUCCESS)
https://docs.foundries.io/latest/user-guide/lmp-device-auto-register/lmp-device-auto-register.html
2021-10-16T11:21:02
CC-MAIN-2021-43
1634323584567.81
[]
docs.foundries.io
Create a device profile in Microsoft Intune Device profiles allow you to add and configure settings, and then push these settings to devices in your organization. You have some options when creating policies: Administrative templates: On Windows 10 and later devices, these templates are ADMX settings that you configure. If you're familiar with ADMX policies or group policy objects (GPO), then using administrative templates is a natural step to Microsoft Intune and Endpoint Manager. For more information, see Administrative Templates Baselines: On Windows 10 and later devices, these baselines include preconfigured security settings. If you want to create security policy using recommendations by Microsoft security teams, then security baselines are for you. For more information, see Security baselines. Settings catalog: On Windows 10 and later devices, use the settings catalog to see all the available settings, and in one location. For example, you can see all the settings that apply to BitLocker, and create a policy that just focuses on BitLocker. On macOS devices, use the settings catalog to configure Microsoft Edge version 77 and settings. For more information, see Settings catalog. On macOS, continue using the preference file to: - Configure earlier versions of Microsoft Edge - Configure Edge browser settings that aren't in settings catalog Templates: On Android, iOS/iPadOS, macOS, and Windows devices, the templates include a logical grouping of settings that configure a feature or concept, such as VPN, email, kiosk devices, and more. If you're familiar with creating device configuration policies in Microsoft Intune, then you're already using these templates. For more information, including the available templates, see Apply features and settings on your devices using device profiles. This article: - Lists the steps to create a profile. - Shows you how to add a scope tag to "filter" your policies. - more Then, choose the profile. Depending on the platform you choose, the settings you can configure are different. The following articles describe the different profiles: - for Endpoint (Windows) - Mobility Extensions (MX) profile (Android device administrator) - Network boundary (Windows) - OEMConfig (Android Enterprise) - PKCS certificate - PKCS imported certificate - Preference file (macOS) - SCEP certificate - Secure assessment (Education) (Windows) - Shared multi-user device (Windows) - Telecom expenses (Android device administrator, iOS, iPadOS) - Trusted certificate - VPN - Wi-Fi - Windows health monitoring - Wired networks (macOS) For example, if you select iOS/iPadOS for the platform, your options look similar to the following profile: If you select Windows 10 and later for the platform, your. For more version numbers, see Windows 10 release information., apply to devices, or apply to both:.
https://docs.microsoft.com/en-NZ/mem/intune/configuration/device-profile-create
2021-10-16T12:31:08
CC-MAIN-2021-43
1634323584567.81
[array(['media/device-profile-create/devices-overview.png', 'In Endpoint Manager and Microsoft Intune, select Devices to see what you can configure and manage.'], dtype=object) array(['media/device-profile-create/create-device-profile.png', 'Create an iOS/iPadOS device configuration policy and profile in Endpoint Manager and Microsoft Intune.'], dtype=object) array(['media/device-profile-create/windows-create-device-profile.png', 'Create a Windows device configuration policy and profile in Endpoint Manager and Microsoft Intune.'], dtype=object) ]
docs.microsoft.com
Download a list of users in Azure Active Directory portal Azure Active Directory (Azure AD) supports bulk user import (create) operations. Required permissions To download the list of users from the Azure AD admin center, you must be signed in with a user assigned to one or more organization-level administrator roles in Azure AD (User Administrator is the minimum role required). Guest inviter and application developer are not considered administrator roles. To download a list of users Sign in to your Azure AD organization with a User administrator account in the organization. Navigate to Azure Active Directory > Users. Then select the users you wish to include in the download by ticking the box in the left column next to each user. Note: At this time, there is no way to select all users for export. Each one must be individually selected. In Azure AD, select Users > Download users. On the Download users page, select Start to receive a CSV file listing user profile properties. If there are errors, you can download and view the results file on the Bulk operation results page. The file contains the reason for each error. Note The download file will contain the filtered list of users based on the scope of the filters applied. The following user attributes are included: - userPrincipalName - displayName - surname - givenName - objectId - userType - jobTitle - department - accountEnabled - usageLocation - streetAddress - state - country - physicalDeliveryOfficeName - city - telephoneNumber - mobile - authenticationAlternativePhoneNumber - authenticationEmail - alternateEmailAddress - ageGroup - consentProvidedForMinor - legalAgeGroupClassification Check status You can see the status of your pending bulk requests in the Bulk operation results page. Bulk download service limits Each bulk activity to create a list of users can run for up to one hour. This enables creation and download of a list of up to 500,000 users.
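Because the portal currently requires selecting each user individually, large exports are often easier to script. The sketch below is an alternative to the portal procedure above, not part of it: it calls the Microsoft Graph users endpoint directly, assumes you have already acquired an access token with User.Read.All (for example through MSAL), and selects only a subset of the attributes listed above:

import csv
import requests

token = "<access token with User.Read.All>"  # placeholder - obtain via MSAL or similar
fields = ["userPrincipalName", "displayName", "givenName", "surname", "id",
          "userType", "jobTitle", "department", "accountEnabled", "usageLocation"]
url = ("https://graph.microsoft.com/v1.0/users"
       "?$select=" + ",".join(fields) + "&$top=999")

with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    while url:
        page = requests.get(url, headers={"Authorization": "Bearer " + token}).json()
        for user in page.get("value", []):
            writer.writerow({k: user.get(k) for k in fields})
        # Graph pages its results; follow @odata.nextLink until it is absent
        url = page.get("@odata.nextLink")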
https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/users-bulk-download
2021-10-16T13:44:24
CC-MAIN-2021-43
1634323584567.81
[]
docs.microsoft.com
OS). “true” (case-insensitive), the system-level prefixes will not be searched for site-packages; otherwise they will.. Note An executable line in a .pth file is run at every Python startup, regardless of whether a particular module is actually going to be used. Its impact should thus be kept to a minimum. The primary intended purpose of executable lines is to make the corresponding module(s) importable (load 3rd-party import hooks, adjust PATH etc). Any other initialization is supposed to be done upon a module’s actual import, if and when it happens. Limiting a code chunk to a single line is a deliberate measure to discourage putting anything more complex here.. USER_SITE¶ Path to the user site-packages for the running Python. Can be Noneif getusersitepackages()hasn’t been called yet. Default value is ~/.local/lib/pythonX.Y/site-packagesfor UNIX and non-framework macOS builds, ~/Library/Python/X.Y/lib/python/site-packagesfor macOS framework builds, and %APPDATA%\Python\PythonXY\site-packageson Windows. This directory is a site directory, which means that .pthfiles in it will be processed. site. USER_BASE¶ Path to the base directory for the user site-packages. Can be Noneif getuserbase()hasn’t been called yet. Default value is ~/.localfor UNIX and macOS non-framework builds, ~/Library/Python/X.Yfor macOS framework builds, and %APPDATA%\Pythonfor Windows. This value is used by Distutils to compute the installation directories for scripts, data files, Python modules, etc. for the user installation scheme. See also PYTHONUSERBASE. USER_BASE. To determine if the user-specific site-packages was added to sys.path ENABLE_USER_SITEshould be used. New in version 3.2. Command Line Interface¶ The site module also provides a way to get the user directories from the command line: $ python3 -m site --user-site /home/user/.local/lib/python3.
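The same values are available programmatically; a quick interactive check might look like this (the paths shown are only illustrative and depend on the platform and Python version):

>>> import site
>>> site.getuserbase()          # honours PYTHONUSERBASE if it is set
'/home/user/.local'
>>> site.getusersitepackages()
'/home/user/.local/lib/python3.8/site-packages'
>>> site.ENABLE_USER_SITE       # False when the user site directory is disabled
True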
https://docs.python.org/3/library/site.html?highlight=pyvenv.cfg
2021-10-16T12:35:51
CC-MAIN-2021-43
1634323584567.81
[]
docs.python.org
Camera Preferences

Tools
- Initial Animation Mode: Determines which animation mode is enabled when the scene is opened.
- Show Locked Drawings As Outlines: In the Camera view, locked elements are displayed as outlines only.
- Bounding Box Selection Style: In the Camera view, selected elements are not highlighted in colour but displayed with the bounding box only.
- Nudging Factor: The nudging increment value.
- Set Keyframe on All Functions with the Transform Tool: When this option is selected, the Transform tool will create a keyframe for all functions of the selected pegs, including the functions that would normally not be affected by the transformation.
- Paste/Drag&Drop Adds.
- Use Rotation Lever with Transformation Tools: Lets you see the rotation lever when using the transformation tools. When this option is deselected, hovering your cursor over the corner of an element’s bounding box is sufficient to rotate it.

Zoom Settings
- Camera View Default Zoom: The default zoom value for the Camera view.
- Top/Side View Default Zoom: The default zoom value for the Top/Side views.

Settings
- Thumbnail Size: The thumbnail size, in pixels, that appears in the Top and Side views.
- Small Bitmap Resolution: The size, in pixels, of the smaller bitmap version of your image. When you import a bitmap image into a scene, a smaller version of it is created in order to accelerate the compositing and playback processes.
- Override Small Bitmap Files: Enable this option if you want the system to generate new versions of the existing smaller bitmap files every time you modify the Small Bitmap Resolution value. When the option is disabled, the existing smaller bitmap versions will not be regenerated and will keep the same resolution as when they were created.
- TV Safety: The ratio value for the TV Safety frame in proportion to the regular camera frame.

Wash Background
- Enable in Camera: Dulls background bitmaps in the Camera view. This allows you to see other elements clearly, such as the ones that have not yet been painted.
- Enable in Camera Drawing Mode: Dulls background bitmaps in the Camera view while using the drawing tools. This allows you to see other elements clearly, such as the ones that have not yet been painted.
- Wash Background Percentage: The Wash Percentage value.

Colour Space
- Display Colour Space: The colour space in which to display the OpenGL preview in the Camera view. In OpenGL View mode, colours in original bitmap images are converted from the colour space selected in their layer properties to the colour space selected in this preference, without being converted to the project colour space first, and colours in bitmap and vector drawing layers are not converted at all. In Render View mode, this preference is not used. However, this option is also the default colour space selected in the drop-down at the bottom of the Camera view when adding a Camera view to the workspace or when resetting or switching your workspace, and the colour space in this drop-down is the one used in Render View mode.
- Continuity: The continuity value for new keyframes and control points.

Preview Wash
- Enable For Out of Date Previews: When you disable the automatic render preview, you must click the Update Preview button in the Rendering toolbar or the Camera view bottom toolbar in order to recalculate and update the preview. When this option is enabled, if the current render preview is out of date and requires you to press the Update Preview button, the Camera view will display the current preview in washed-out colours.
- Wash Background Percentage: The value, in percentage, by which the outdated preview will be washed out.

Inverse Kinematics
- Min/Max Angle Constraint Weight: This value acts similarly to the Stiffness setting in the Inverse Kinematics Properties panel. This option only affects the minimum and maximum angle values set using the Min/Max Angle Mode. The higher the value, the more you need to move the body part to approach the minimum and maximum values set. Although the maximum value goes up to 1.0, in a production setting the most practical value to use would be closer to 0.1.
https://docs.toonboom.com/help/harmony-20/premium/preferences-guide/camera-preference.html
2021-10-16T13:06:21
CC-MAIN-2021-43
1634323584567.81
[]
docs.toonboom.com
Welcome to specio’s documentation!
Specio is a Python library that provides an easy interface to read hyperspectral data. It is cross-platform, runs on Python 2.x and 3.x, and is easy to install. This package is heavily inspired by imageio.
Getting started
Information to install, test, and contribute to the package.
User documentation
The user documentation contains information about each supported format as well as the API documentation.
Developer documentation
The developer documentation contains information on developing a new plugin and adding support for new spectroscopy formats.
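As a quick orientation, here is a minimal usage sketch. It assumes specio has been installed from PyPI and follows the project's imageio-style specread entry point and Spectrum container; the file name and the amplitudes/wavelength/meta attribute names are assumptions based on that convention rather than something stated on this page.

# Sketch: read a spectroscopy file with specio (imageio-style API).
# Assumes `pip install specio` and a supported input file (here a .spc file).
from specio import specread

spectra = specread("sample.spc")       # hypothetical input file

# The returned Spectrum object bundles the measured data and its metadata.
print(spectra.amplitudes.shape)        # measured intensity values
print(spectra.wavelength[:5])          # corresponding wavelengths
print(spectra.meta)                    # format-specific metadata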
https://specio.readthedocs.io/en/latest/
2021-10-16T12:29:45
CC-MAIN-2021-43
1634323584567.81
[]
specio.readthedocs.io
Toon Girls Skin Pack is a hand-crafted set of stylized textures and makeup with a wide variety of options. First, six skin tones ranging from pale skin to dark skin allow you to get just the right look. Then, using the power of the Layered Image Editor, you can apply a range of eyelash, eyebrow, eyeshadow, lipstick, eyeliner and blush options to get your toon girl looking her best. You can even layer more than one at a time for even more looks. Eyeshadow options include 11 smokey eyes and 11 matching upper-lid-only shadows. Nine lipstick colors are included. Several bump options are also included, along with five lip glossiness settings. Two eyeliner options and four beauty mark options add just the right touch. Choose from stylized brows, or a more realistic look, available in five colors each. Finally, seven stylized lashes and a thick, luxurious realistic lash are offered. All texture and bump maps are hand painted, from the lips to the fingernails. This is not a photo-realistic texture set. Created and tested in DAZ Studio 4.6. Based on Victoria 5 UVs. Requires Victoria 5 for Genesis, or Genesis 2 Basic Female. *Light Set used in Promo images.
http://docs.daz3d.com/doku.php/public/read_me/index/16935/start
2021-10-16T12:06:23
CC-MAIN-2021-43
1634323584567.81
[]
docs.daz3d.com
Machines with secure aspects enabled by FoundriesFactory
LmP provides machines with secure aspects enabled by default when using FoundriesFactory. The purpose of these machines is to gather the configuration needed to enable secure boot and other security aspects, and to provide a set of artifacts that help in setting the hardware board to secure boot. Warning: It is recommended to read Secure Boot on IMX before proceeding with the following steps.

Supported machines
- NXP iMX6ULL-EVK Secure: imx6ullevk-sec is the imx6ullevk machine configured to have secure boot enabled by default.
- NXP iMX8M-MINILPD4 EVK Secure: imx8mm-lpddr4-evk-sec is the imx8mmevk machine configured to have secure boot and secure storage enabled by default.
- NXP Toradex Apalis-iMX6 Secure: apalis-imx6-sec is the apalis-imx6 machine configured to have secure boot and secure storage enabled by default.

How to enable
The suggested way to enable a secure machine in a factory is to select the correct platform when creating the factory. This might not be ideal, as the customer might want to evaluate their setup in an open state for easier development. The platform definition comes from ci-scripts, but due to computation limits the CI is configured to decline changes to the machines: parameter. When attempting to replace or add a new machine in a factory, customers face this issue:
remote: A new machine is being added: {'<machine>'}
remote: ERROR: Please contact support to update machines
remote: error: hook declined to update refs/heads/master
To <factory>/ci-scripts.git
! [remote rejected] master -> master (hook declined)
In this case, ask a support engineer to update the factory-config.yml file in the ci-scripts git repository for your FoundriesFactory to the following configuration:
machines:
- <machine-sec>
mfg_tools:
- machine: <machine-sec>
  params:
    DISTRO: lmp-mfgtool
    IMAGE: mfgtool-files
    EXTRA_ARTIFACTS: mfgtool-files.tar.gz
    UBOOT_SIGN_ENABLE: "1"

How to use
Trigger a platform build and wait until the target is created. Follow the steps from Supported Boards to prepare the hardware and download the same artifacts. The list of downloaded artifacts should be:
mfgtool-files-<machine-sec>.tar.gz
lmp-factory-image-<machine-sec>.wic.gz
SPL-<machine-sec>
sit-<machine-sec>.bin
u-boot-<machine-sec>.itb
Note: For the i.MX8*-based machines, the SPL binary is included in imx-boot, and the user should refer to imx-boot-<machine-sec> throughout this document.
Expand the tarballs:
gunzip lmp-factory-image-<machine-sec>.wic.gz
tar -zxvf mfgtool-files-<machine-sec>.tar.gz
The resultant directory tree should look like the following:
├── lmp-factory-image-<machine-sec>.wic
├── mfgtool-files-<machine-sec>
│   ├── bootloader.uuu
│   ├── close.uuu
│   ├── full_image.uuu
│   ├── fuse.uuu
│   ├── readme.md
│   ├── SPL-mfgtool
│   ├── u-boot-mfgtool.itb
│   ├── uuu
│   └── uuu.exe
├── mfgtool-files-<machine-sec>.tar.gz
├── SPL-<machine-sec>
├── sit-<machine-sec>.bin
└── u-boot-<machine-sec>.itb
Follow the readme.md under mfgtool-files-<machine-sec> for instructions on signing the SPL images, fusing, and closing the board. Warning: The fuse and close procedures are irreversible. The instructions from the readme.md file should be followed and executed with caution, and only after understanding the critical implications of those commands.

How to use custom keys

Create the keys
There are different ways to create and store the keys needed for secure boot. One important reference for understanding how to generate the PKI tree is the i.MX Secure Boot on HABv4 Supported Devices application note from NXP. In addition, the U-Boot project includes documentation on generating a fast authentication PKI tree. Warning: It is critical that the keys created in this process are stored in a secure and safe place. Once the keys are fused to the board and the board is closed, that board will only boot signed images, so the keys are required in future steps.

Generate the MfgTools scripts
There is a set of scripts to help with creating the commands used to fuse the key into the fuse banks of <machine> and to close the board, which configures the board to boot only signed images.
- Clone lmp-tools from GitHub:
git clone git://github.com/foundriesio/lmp-tools.git
- Export the path to where the keys are stored:
export KEY_FILE=/path-to-key-files/<efusefile>
- Generate the script to fuse the board:
./lmp-tools/security/<soc>/gen_fuse.sh -s $KEY_FILE -d ./fuse.uuu
- Generate the script to close the board:
./lmp-tools/security/<soc>/gen_close.sh -s $KEY_FILE -d ./close.uuu
- Install the scripts to the meta-subscriber-overrides:
mkdir -p <factory>/meta-subscriber-overrides/recipes-support/mfgtool-files/mfgtool-files/<machine>
cp fuse.uuu <factory>/meta-subscriber-overrides/recipes-support/mfgtool-files/mfgtool-files/<machine>
cp close.uuu <factory>/meta-subscriber-overrides/recipes-support/mfgtool-files/mfgtool-files/<machine>
cat <factory>/meta-subscriber-overrides/recipes-support/mfgtool-files/mfgtool-files_%.bbappend
The content of mfgtool-files_%.bbappend should be:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append_<machine> = " \
    file://fuse.uuu \
    file://close.uuu \
"
do_deploy_prepend_<machine>() {
    install -d ${DEPLOYDIR}/${PN}
    install -m 0644 ${WORKDIR}/fuse.uuu ${DEPLOYDIR}/${PN}/fuse.uuu
    install -m 0644 ${WORKDIR}/close.uuu ${DEPLOYDIR}/${PN}/close.uuu
}
Tip: Replace the machine name in case the factory is using a custom machine name.
- Inspect the changes and push them accordingly:
git status
The result of git status should look like:
On branch devel
Your branch is up to date with 'origin/devel'.
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
new file: recipes-support/mfgtool-files/mfgtool-files/<machine>/close.uuu
new file: recipes-support/mfgtool-files/mfgtool-files/<machine>/fuse.uuu
new file: recipes-support/mfgtool-files/mfgtool-files_%.bbappend
The changes add the UUU scripts to the mfgtool-files artifacts of the next targets. Run fuse.uuu and close.uuu to fuse the custom keys and close the board, respectively. Warning: The fuse.uuu and close.uuu scripts include commands whose results are irreversible. The scripts should be executed with caution and only after understanding their critical implications.
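The readme.md shipped in the mfgtool archive drives these steps with NXP's uuu tool. The snippet below is only a sketch of how the two generated scripts might be run from Python, assuming the bundled uuu binary accepts a .uuu script path as its argument and the board is attached in serial-download mode; it is not a substitute for the readme.md instructions, and both steps are irreversible.

# Sketch: run the generated fuse/close scripts with uuu via subprocess.
# Assumptions: the uuu binary from mfgtool-files-<machine-sec> takes a script
# path as its argument, and the board is attached in serial-download mode.
import subprocess
from pathlib import Path

MFGTOOL_DIR = Path("mfgtool-files-imx8mm-lpddr4-evk-sec")  # example machine name

def run_uuu_script(script: Path) -> None:
    """Run one .uuu script and raise immediately if it fails."""
    subprocess.run([str(MFGTOOL_DIR / "uuu"), str(script)], check=True)

# 1. Fuse the key hashes into the SoC fuse banks -- irreversible.
run_uuu_script(Path("fuse.uuu"))

# 2. Only after verifying that signed images boot: close the board -- irreversible.
# run_uuu_script(Path("close.uuu"))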
https://docs.foundries.io/latest/reference-manual/security/secure-machines.html
2021-10-16T12:07:21
CC-MAIN-2021-43
1634323584567.81
[]
docs.foundries.io
This page describes the Exalate administration menu. The administration panel provides access to the main Exalate functionality and its configuration. After installing Exalate for Zendesk, Azure DevOps, GitHub, ServiceNow, and HP QC/ALM for the first time, only the License Details tab is accessible.
- Getting Started: A short step-by-step guide on how to configure your first synchronization.
- General Settings: Basic app settings, which depend on the platform. More details.
- Connections: A list of available connections between instances. If you don't have any connections yet, you can configure one under this tab. More details.
- Exalate Notifications: You can add users to receive email notifications every time Exalate raises a synchronization error.
- Entity Sync Status: Under this tab, you can check the issue sync status and start issue synchronization. More details.
- Errors: Shows a list of errors when synchronization is blocked. You can manage errors and find the error details. More details.
- Bulk Connect: Under this tab, you can connect existing issues between instances with a simple mapping file. More details.
- Triggers: Here you can configure automatic synchronization with scripts. More details.
- Sync Queue: This utility helps monitor the synchronization progress. More details.
- License Details: Find information about the Exalate instance and/or the Network license here. More details.
- Clean-up Tools: These tools help stop issue sync and remove sync information. They are usually used to resolve unhandled synchronization problems. More details.
https://docs.idalko.com/exalate/display/ED/Exalate+Menu+Panel
2021-10-16T12:12:12
CC-MAIN-2021-43
1634323584567.81
[]
docs.idalko.com
Channel overview What is a channel? A channel is a forum where members can collaborate, have discussions, and share posts with other members. You can create an unlimited number of channels in your account. Channels can be organized based on teams, projects, interest groups, etc. Common channel examples: - marketing - IT management - inbound sales - music lovers - sports - product management - new website launch How to use channels in Kissflow One example of a channel is for your entire HR team. Members can use the channel to post interesting links they’ve found, share notes from a meeting, conduct a poll, or post pictures from a recent offsite. The Everyone channel Every Kissflow account comes with a system-generated channel called Everyone. All current users are automatically added as members. When you add new users, they are automatically added to the Everyone channel. Users cannot be removed from this channel.
https://docs.kissflow.com/article/5tm2q0wsqd-channel-overview
2021-10-16T11:10:47
CC-MAIN-2021-43
1634323584567.81
[array(['https://files.helpdocs.io/vy1bn54mxh/articles/5tm2q0wsqd/1560750938783/everyone-channel.png', None], dtype=object) ]
docs.kissflow.com
Changes in Behavior - CloudApp Description Change: the description of a CloudApp is no longer populated with the short_descriptionfrom the CAT. Previously, if a user launched a CloudApp and didn't set a description, the system would copy the short_descriptionfrom the CAT -- this is no longer the case. - Fixed a bug that sometimes prevented user time zone settings from being saved - Fixed a bug related to selecting days when editing an existing Schedule in Designer - Fixed a bug that would sometimes show an error growler when deleting a CloudApp
https://docs.rightscale.com/release-notes/self-service/2016/04/28.html
2021-10-16T12:37:14
CC-MAIN-2021-43
1634323584567.81
[]
docs.rightscale.com
Package pprof

func Cmdline
func Cmdline(w http.ResponseWriter, r *http.Request)
Cmdline responds with the running program's command line, with arguments separated by NUL bytes. The package initialization registers it as /debug/pprof/cmdline.

func Handler
func Handler(name string) http.Handler
Handler returns an HTTP handler that serves the named profile.

func Index
func Index(w http.ResponseWriter, r *http.Request)
Index responds to a request for /debug/pprof/ with an HTML page listing the available profiles, and serves the profile named by the request (for example, /debug/pprof/heap serves the heap profile).

func Profile
func Profile(w http.ResponseWriter, r *http.Request)
Profile responds with the pprof-formatted cpu profile. Profiling lasts for the duration specified in the seconds GET parameter, or for 30 seconds if not specified. The package initialization registers it as /debug/pprof/profile.

func Symbol
func Symbol(w http.ResponseWriter, r *http.Request)
Symbol looks up the program counters listed in the request, responding with a table mapping program counters to function names. The package initialization registers it as /debug/pprof/symbol.

func Trace (added in Go 1.5)
func Trace(w http.ResponseWriter, r *http.Request)
Trace responds with the execution trace in binary form. Tracing lasts for the duration specified in the seconds GET parameter, or for 1 second if not specified. The package initialization registers it as /debug/pprof/trace.
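Because each of these handlers is served over plain HTTP, the profiles can be collected with any HTTP client, not just the go tool. A small sketch follows, assuming the instrumented program exposes the handlers on localhost:6060; the host, port, and output filenames are assumptions.

# Sketch: fetch pprof data from a running program's /debug/pprof endpoints.
# Assumes the handlers are registered and served on localhost:6060.
from urllib.request import urlopen

BASE = "http://localhost:6060/debug/pprof"

# CPU profile for 10 seconds (the handler defaults to 30s without ?seconds=).
with urlopen(f"{BASE}/profile?seconds=10") as resp, open("cpu.pprof", "wb") as out:
    out.write(resp.read())

# Execution trace for 1 second (the handler's default duration).
with urlopen(f"{BASE}/trace?seconds=1") as resp, open("trace.out", "wb") as out:
    out.write(resp.read())

# Command line of the target process; arguments are NUL-separated.
with urlopen(f"{BASE}/cmdline") as resp:
    print(resp.read().split(b"\x00"))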
https://docs.studygolang.com/pkg/net/http/pprof/
2021-10-16T12:13:53
CC-MAIN-2021-43
1634323584567.81
[]
docs.studygolang.com
Updating Statistics with ANALYZE
The most important prerequisite for good query performance is to begin with accurate statistics for the tables. Updating statistics with the ANALYZE statement enables the query planner to generate optimal query plans. When a table is analyzed, information about the data is stored in the system catalog tables. If the stored information is out of date, the planner can generate inefficient plans.

Generating Statistics Selectively
Running ANALYZE with no arguments updates statistics for all tables in the database. This can be a very long-running process and it is not recommended. You should ANALYZE tables selectively when data has changed, or use the analyzedb utility. Running ANALYZE on a large table can take a long time. If it is not feasible to run ANALYZE on all columns of a very large table, you can generate statistics for selected columns only using ANALYZE table(column, ...). Be sure to include columns used in joins, WHERE clauses, SORT clauses, GROUP BY clauses, or HAVING clauses. For a partitioned table, the names of the leaf partition tables can be retrieved with:
SELECT partitiontablename FROM pg_partitions WHERE tablename='parent_table';

Improving Statistics Quality
There is a trade-off between the amount of time it takes to generate statistics and the quality, or accuracy, of the statistics. To allow large tables to be analyzed in a reasonable amount of time, ANALYZE takes a random sample of the table contents, rather than examining every row. To increase the number of sample values for all table columns, adjust the default_statistics_target configuration parameter. The target value ranges from 1 to 1000; the default target value is 100. The default_statistics_target variable applies to all columns by default, and specifies the number of values that are stored in the list of common values. A larger target may improve the quality of the query planner's estimates, especially for columns with irregular data patterns. default_statistics_target can be set at the master/session level and requires a reload.

When to Run ANALYZE
- after loading data,
- after CREATE INDEX operations,
- and after INSERT, UPDATE, and DELETE operations that significantly change the underlying data.

Configuring Automatic Statistics Collection
The gp_autostats_mode configuration parameter, together with the gp_autostats_on_change_threshold parameter, determines when an automatic analyze operation is triggered. When automatic statistics collection is triggered, the planner adds an ANALYZE step to the query. By default, gp_autostats_mode is on_no_stats, which triggers statistics collection for CREATE TABLE AS SELECT, INSERT, or COPY operations on any table that has no existing statistics. Setting gp_autostats_mode to on_change triggers statistics collection only when the number of rows affected exceeds the threshold defined by gp_autostats_on_change_threshold, which has a default value of 2147483647. Operations that can trigger automatic statistics collection with on_change are: CREATE TABLE AS SELECT, UPDATE, DELETE, INSERT, and COPY. Setting gp_autostats_mode to none disables automatic statistics collection. For partitioned tables, automatic statistics collection is not triggered if data is inserted from the top-level parent table of a partitioned table. But automatic statistics collection is triggered if data is inserted directly into a leaf table (where the data is stored) of the partitioned table.
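To make the selective-ANALYZE and statistics-target advice concrete, here is a sketch that issues those statements from Python with psycopg2. The connection string and the sales table with customer_id and order_date columns are assumptions for illustration; only the SQL statements themselves follow from the text above.

# Sketch: selective ANALYZE and a larger statistics target via psycopg2.
# Assumes a reachable Greenplum master and a hypothetical `sales` table.
import psycopg2

conn = psycopg2.connect("dbname=analytics host=gp-master user=gpadmin")
conn.autocommit = True
cur = conn.cursor()

# Raise the per-column sample target for this session (default 100, max 1000)
# before re-analyzing columns with irregular data patterns.
cur.execute("SET default_statistics_target = 500")

# Analyze only the columns used in joins, WHERE, GROUP BY, etc.,
# instead of the whole (very large) table.
cur.execute("ANALYZE sales (customer_id, order_date)")

cur.close()
conn.close()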
https://gpdb.docs.pivotal.io/6-16/best_practices/analyze.html
2021-10-16T12:27:03
CC-MAIN-2021-43
1634323584567.81
[]
gpdb.docs.pivotal.io
The io-ports-control interface
io-ports-control allows access to all I/O ports, including the ability to write to /dev/port to change the I/O port permissions, the privilege level of the calling process, and disabling interrupts. Auto-connect: no. Requires snapd version 2.21+. This is a snap interface. See Interface management and Supported interfaces for further details on how interfaces are used.
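For illustration only, here is a sketch of the kind of raw access this interface permits once connected: reading a single byte from an x86 I/O port through /dev/port. The port number is an arbitrary example, the code must run as root inside a snap that plugs io-ports-control, and it is shown to demonstrate the capability rather than as recommended practice.

# Sketch: read one byte from an x86 I/O port via /dev/port.
# Requires root and, inside a snap, a connected io-ports-control plug.
import os

PORT = 0x80  # example port number only (commonly the POST diagnostic port)

fd = os.open("/dev/port", os.O_RDONLY)
try:
    os.lseek(fd, PORT, os.SEEK_SET)   # the file offset selects the I/O port
    value = os.read(fd, 1)
    print(f"port 0x{PORT:02x} -> 0x{value[0]:02x}")
finally:
    os.close(fd)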
https://docs.snapcraft.io/the-io-ports-control-interface/7848
2019-03-18T15:48:06
CC-MAIN-2019-13
1552912201455.20
[]
docs.snapcraft.io
Route Text Messages and Chats to Qualified Agents with Skills-Based Routing (Generally Available)
Where: This feature is available to orgs with Live Agent or LiveMessage. Live Agent is available in Performance and Developer edition orgs that were created after June 14, 2012, and in Unlimited and Enterprise edition orgs with the Service Cloud. LiveMessage is available through a Digital Engagement add-on. LiveMessage in Lightning Experience is available in Enterprise, Performance, Unlimited, and Developer editions with the Service Cloud.
Why: Skills-based routing allows work items to be routed using more sophisticated and dynamic criteria than queue-based routing.
How: To use skills-based routing, enable the feature, enable and create skills, create service resources for agents, and assign skills to service resources. Then set up routing.
http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_omnichannel_skills_based_routing.htm
2019-03-18T15:38:14
CC-MAIN-2019-13
1552912201455.20
[]
releasenotes.docs.salesforce.com
", "MetricDefinitions": [ { "Name": "string", "Regex": "string" } ], "TrainingImage": "string", "TrainingInputMode": "string" }, "CreationTime": number, "EnableInterContainerTrafficEncryption": boolean, "EnableNetworkIsolation": boolean, "FailureReason": "string", "FinalMetricDataList": [ { "MetricName": "string", "Timestamp": number, "Value": number } ], "HyperParameters": { "string" : "string" }, "InputDataConfig": [ { "ChannelName": "string", "CompressionType": "string", "ContentType": "string", "DataSource": { }, "TrainingEndTime": number, "TrainingJobArn": "string", "TrainingJobName": "string", "TrainingJobStatus": "string", "TrainingStartTime": - CreationTime A timestamp that indicates when the training job was created. Type: Timestamp - algorithm in distributed training.. Note The Semantic Segmentation built-in algorithm does not support network isolation. Type: Boolean - 20 items. - HyperParameters Algorithm-specific parameters. Type: String to string map Key Length Constraints: Maximum length of 256. Key Pattern: .* Value Length Constraints: Maximum length of 256. Value Pattern: .* - InputDataConfig An array of Channelobjects that describes each data input channel. Type: Array of Channel objects Array Members: Minimum number of 1 item. Maximum number of 8 items. - - SecondaryStatusTransitions A history of all of the secondary statuses that the training job has transitioned through. Type: Array of SecondaryStatusTransition objects - StoppingCondition The condition under which to stop the training job. Type: StoppingCondition object - TrainingEndTime Indicates..:
https://docs.aws.amazon.com/sagemaker/latest/dg/API_DescribeTrainingJob.html
2019-03-18T16:12:13
CC-MAIN-2019-13
1552912201455.20
[]
docs.aws.amazon.com
Install MongoDB Community Edition on Ubuntu

Overview
The following tutorial uses a package manager to install MongoDB 4.2 Community Edition on LTS Ubuntu Linux systems.
Production Notes: Before deploying MongoDB in a production environment, consider the Production Notes document.

MongoDB Version
This tutorial installs MongoDB 4.2 Community Edition. For other versions of MongoDB, refer to the corresponding version of the manual.

Platform Support
MongoDB only provides packages for the following 64-bit LTS (long-term support) Ubuntu releases:
- 14.04 LTS (trusty)
- 16.04 LTS (xenial)
- 18.04 LTS (bionic)
See Supported Platforms for more information.
Windows Subsystem for Linux (WSL) - Unsupported: MongoDB does not support WSL, and users on WSL have encountered various issues installing on WSL.

Install MongoDB Community Edition using .deb Packages
Important: The mongodb-org-unstable package is officially maintained and supported by MongoDB Inc. and kept up-to-date with the most recent MongoDB releases. This installation procedure uses the mongodb-org-unstable package. The mongodb package provided by Ubuntu is not maintained by MongoDB Inc. and conflicts with the mongodb-org-unstable packages.

Create the list file /etc/apt/sources.list.d/mongodb-org-4.2.list for your version of Ubuntu. Click on the appropriate tab for your version of Ubuntu. If you are unsure of what Ubuntu version the host is running, open a terminal or shell on the host and execute lsb_release -dc.
- Create the /etc/apt/sources.list.d/mongodb-org-4.2.list file for Ubuntu 18.04 (Bionic):
- The following instruction is for Ubuntu 16.04 (Xenial). For Ubuntu 14.04 (Trusty) or Ubuntu 18.04 (Bionic), click on the appropriate tab. Create the /etc/apt/sources.list.d/mongodb-org-4.2.list file for Ubuntu 16.04 (Xenial):
- The following instruction is for Ubuntu 14.04 (Trusty). For Ubuntu 16.04 (Xenial) or Ubuntu 18.04 (Bionic), click on the appropriate tab. Create the /etc/apt/sources.list.d/mongodb-org-4.2.list file for Ubuntu 14.04 (Trusty):

Install the MongoDB packages.
You can install either the latest stable version of MongoDB or a specific version of MongoDB.
- Install the latest version of MongoDB.
- Install a specific release of MongoDB.
To install the latest stable version, issue the following
To install a specific release, you must specify each component package individually along with the version number, as in the following example:
If you only install mongodb-org-unstable=4.1.9 and do not include the component packages, the latest version of each MongoDB package will be installed regardless of what version you specified.
Optional. Although you can specify any available version of MongoDB, apt-get will upgrade the packages when a newer version becomes available. To prevent unintended upgrades, you can pin the package at the currently installed version:
For help with troubleshooting errors encountered while installing MongoDB on Ubuntu, see our troubleshooting guide.

Run MongoDB Community Edition
Production Notes: Before deploying MongoDB in a production environment, consider the Production Notes document.
Important: The following instructions assume that you have downloaded the official MongoDB mongodb-org packages, and not the unofficial mongodb package provided by Ubuntu.
See also The recommended procedure to install is through the package manager, as detailed on this page. However, if you choose to install by directly downloading the .tgz file, see Install using .tgz Tarball on Ubuntu.
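Not part of the official tutorial, but once mongod has been installed and started, a quick sanity check from a driver confirms the server is reachable. A minimal sketch using PyMongo, assuming pip install pymongo and the default bind address and port (localhost:27017):

# Sketch: verify a freshly installed mongod is reachable, using PyMongo.
# Assumes the default bind address/port (localhost:27017) and pymongo installed.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=3000)
try:
    print("ping:", client.admin.command("ping"))          # raises if unreachable
    print("server version:", client.server_info()["version"])
finally:
    client.close()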
https://docs.mongodb.com/master/tutorial/install-mongodb-on-ubuntu/?_ga=2.228094813.1199241773.1509980407-554659428.1496709079
2019-03-18T16:47:10
CC-MAIN-2019-13
1552912201455.20
[]
docs.mongodb.com
Bring in More Data Through Google Big Query and Heroku Connections You can now load up to 100 million rows per object through Google Big Query and Heroku connections. We increased this limit from 20 million rows. Where: This feature applies to Einstein Analytics in Lightning Experience and Salesforce Classic. Einstein Analytics is available in Developer Edition and for an extra cost in Enterprise, Performance, and Unlimited editions.
https://releasenotes.docs.salesforce.com/en-us/winter19/release-notes/rn_bi_integrate_connectors_limits.htm
2019-03-18T16:48:20
CC-MAIN-2019-13
1552912201455.20
[]
releasenotes.docs.salesforce.com