# Patroni cluster (with Zookeeper) in a docker swarm on a local machine
Intro
-----
If you store crucial data (in particular, in SQL databases), sooner or later you will start thinking about some kind of safe cluster: a distant guardian that protects consistency and availability at all times. Even if the main server with all the precious data gets knocked out, the show must go on, right? This basically means the database must still be available and the data must stay up to date with the one on the failed server.
As you might have noticed, there are dozens of ways to go, and Patroni is just one of them. There are plenty of articles providing a more or less detailed comparison of the available options, so I assume I'm free to skip the part where I lure you to Patroni's side. Let's start from the point where, among others, you are already leaning towards Patroni and are willing to try it out in a more or less real-case setup.
As for myself, I did try a couple of other solutions and in one of them (won't name it) my issue seems to be still hanging open and not answered on their GitHub even though months have passed.
By the way, I am not originally a DevOps engineer, so when the need for a high-availability cluster arose, I hit every single bump and rock down the road. Hope this tutorial helps you get the job done with as little pain as possible.
If you don't want any more explanations and lead-ins, jump right in.
Otherwise, you might want to read some more notes on the setup I went on with.
### Do we need one more Patroni tut?
Let's face it, there are quite enough tutorials published on how to set up the Patroni cluster. This one is covering deployment in a docker swarm with Zookeeper as a DCS. So why zookeeper and why docker swarm?
#### Why Zookeeper?
Actually, it's something you might want to consider seriously when choosing a Patroni setup for your production.
The thing is that Patroni uses third-party services basically to establish and maintain communication among its nodes, the so-called DCS (Dynamic Configuration Storage).
If you have already studied tutorials on Patroni you probably noticed that the most common case is to implement communication through the 'etcd' cluster.
The notable thing about etcd is here (from its faq page):
```
Since etcd writes data to disk, its performance strongly depends on disk
performance. For this reason, SSD is highly recommended.
```
If you don't have SSD on each machine you are planning to run your etcd cluster, it's probably not a good idea to choose it as a DCS for Patroni. In a real production scenario, it is possible that you simply overwhelm your etcd cluster, which might lead to IO errors. Doesn't sound good, right?
So here comes Zookeeper, which stores all its data in memory and might actually come in handy if your servers lack SSDs but have plenty of RAM.
### Why docker swarm?
In my situation, I had no other choice as it was one of the business requirements to set it up in a docker swarm. So if by circumstances it's your case as well, you're exactly in the right spot!
But for the rest of the readers with "testing and trying" purposes, it is quite a decent choice too, as you don't need to install or prepare any third-party services (except for docker, of course) or put dozens of dependencies on your machine.
Guess it's not far from the truth that we all have Docker engine installed and set up everywhere anyway and it's convenient to keep everything in containers. With one-command tuning, docker is good enough to run your first Patroni cluster locally without virtual machines, Kubernetes, and such.
So if you don't want to dig into other tools and want to accomplish everything neat and clean in a well-known docker environment this tutorial could be the right way to go.
Some extra
----------
In this tutorial, I'm also planning to show various ways to check on the cluster stats (to be concrete, we'll cover all three of them) and provide a simple script and strategy for a test run.
I suppose that's enough talking; let's go ahead and start practicing.
Docker swarm
------------
For a quick test of deployment in the docker swarm, we don't really need multiple nodes in our cluster. As we are able to scale our services at our needs (imitating failing nodes), we are going to be fine with just one node working in a swarm mode.
I assume that you already have the Docker engine installed and running. From this point, all you need is to run this command:
```
docker swarm init
# now check your single-node cluster
docker node ls
ID HOSTNAME STATUS AVAILABILITY
a9ej2flnv11ka1hencoc1mer2 * floitet Ready Active
```
> The most important feature of the docker swarm is that we are now able to manipulate not just simple containers, but services. Services are basically abstractions on top of containers. Referring to the OOP paradigm, docker service would be a class, storing a set of rules and container would be an object of this class. Rules for services are defined in docker-compose files.
>
>
Notice your node's hostname, we're going to make use of it quite soon.
Well, as a matter of fact, that's pretty much it for the docker swarm setup.
Seems like we're doing fine so far, let's keep it up!
Zookeeper
---------
Before we start deploying Patroni services we need to set up DCS first which is Zookeeper in our case. I'm gonna go for the 3.4 version. From my experience, it works just fine.
Below is the full docker-compose config along with notes on the details worth saying a few words about.
docker-compose-zookeeper.yml
```
version: '3.7'
services:
zoo1:
image: zookeeper:3.4
hostname: zoo1
ports:
- 2191:2181
networks:
- patroni
environment:
ZOO_MY_ID: 1
ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
deploy:
replicas: 1
placement:
constraints:
- node.hostname == floitet
restart_policy:
condition: any
zoo2:
image: zookeeper:3.4
hostname: zoo2
networks:
- patroni
ports:
- 2192:2181
environment:
ZOO_MY_ID: 2
ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888
deploy:
replicas: 1
placement:
constraints:
- node.hostname == floitet
restart_policy:
condition: any
zoo3:
image: zookeeper:3.4
hostname: zoo3
networks:
- patroni
ports:
- 2193:2181
environment:
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888
deploy:
replicas: 1
placement:
constraints:
- node.hostname == floitet
restart_policy:
condition: any
networks:
patroni:
driver: overlay
attachable: true
```
Details
The important thing, of course, is to give every node its unique service name and published port. The hostname should preferably be set to the same value as the service name.
```
zoo1:
image: zookeeper:3.4
hostname: zoo1
ports:
- 2191:2181
```
Notice how we list the servers in this line, changing which server is bound to 0.0.0.0 depending on the service number. So for the first service (zoo1) it is server.1 that is bound to 0.0.0.0, while for zoo2 it will be server.2, and so on.
```
ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
```
This is how we control the deployment among nodes. As we have only one node, we set a constraint to this node for all services. When you have multiple nodes in your docker swarm cluster and want to spread services among all nodes - just replace node.hostname with the name of the desirable node (use 'docker node ls' command).
```
placement:
constraints:
- node.hostname == floitet
```
And the final thing we need to take care of is the network. We're going to deploy Zookeeper and Patroni clusters in one overlay network so that they could communicate with each other in an isolated environment using the service names.
```
networks:
patroni:
driver: overlay
# we need to mark this network as attachable
# so that we can connect the patroni services to it later on
attachable: true
```
Guess it's time to deploy the thing!
```
sudo docker stack deploy --compose-file docker-compose-zookeeper.yml patroni
```
Now let's check if the job is done right. The first step to take is this:
```
sudo docker service ls
gxfj9rs3po7z patroni_zoo1 replicated 1/1 zookeeper:3.4 *:2191->2181/tcp
ibp0mevmiflw patroni_zoo2 replicated 1/1 zookeeper:3.4 *:2192->2181/tcp
srucfm8jrt57 patroni_zoo3 replicated 1/1 zookeeper:3.4 *:2193->2181/tcp
```
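If any of the services shows 0/1 replicas, a quick way to see what went wrong is to list the stack's tasks; the stack name here is just the one we passed to the deploy command above:
```
# list all tasks of the stack and their current state
docker stack ps patroni
# add --no-trunc to see the full error message of a failed task
docker stack ps --no-trunc patroni
```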
And the second step is to actually ping the zookeeper service with the special Four-Letter-Command:
```
echo mntr | nc localhost 2191
# the output should look something like this
zk_version 3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
zk_avg_latency 6
zk_max_latency 205
zk_min_latency 0
zk_packets_received 1745
zk_packets_sent 1755
zk_num_alive_connections 3
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count 16
zk_watch_count 9
zk_ephemerals_count 4
zk_approximate_data_size 1370
zk_open_file_descriptor_count 34
zk_max_file_descriptor_count 1048576
zk_fsync_threshold_exceed_count 0
```
This means that the zookeeper node is responding and doing its job. You could also check the zookeeper service logs if you wish:
```
docker service logs $zookeeper-service-id
# service-id comes from the 'docker service ls' command.
# in my case it could be
docker service logs gxfj9rs3po7z
```
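Another quick liveness probe, if you prefer a shorter answer than mntr, is the 'ruok' four-letter command (the port is the published one of the node you want to check):
```
echo ruok | nc localhost 2191
# a healthy node answers with
imok
```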
Okay, great! Now we have the Zookeeper cluster running. Thus it's time to move on and finally, get to the Patroni itself.
Patroni
-------
And here we are down to the main part of the tutorial where we handle the Patroni cluster deployment. The first thing I need to mention is that we actually need to build a Patroni image before we move forward. I'll try to be as detailed and precise as possible showing the most important parts we need to be aware of managing this task.
We're going to need multiple files to get the job done so you might want to keep them together. Let's create a 'patroni-test' directory and cd to it. Below we are going to discuss the files that we need to create there.
* **patroni.yml**
This is the main config file. The thing about Patroni is that we are able to set parameters from different places, and here is one of them. This file gets copied into our custom docker image, so updating it requires rebuilding the image. I personally prefer to store here the parameters that I see as 'stable' or 'permanent', the ones I'm not planning to change a lot. Below I provide a very basic config. You might want to configure more parameters for the PostgreSQL engine (e.g. max_connections, etc.), but for the test deployment I think this one should be fine.
patroni.yml
```
scope: patroni
namespace: /service/
bootstrap:
dcs:
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
postgresql:
use_pg_rewind: true
postgresql:
use_pg_rewind: true
initdb:
- encoding: UTF8
- data-checksums
pg_hba:
- host replication all all md5
- host all all all md5
zookeeper:
hosts:
- zoo1:2181
- zoo2:2181
- zoo3:2181
postgresql:
data_dir: /data/patroni
bin_dir: /usr/lib/postgresql/11/bin
pgpass: /tmp/pgpass
parameters:
unix_socket_directories: '.'
tags:
nofailover: false
noloadbalance: false
clonefrom: false
nosync: false
```
Details
What we should be aware of is that we need to specify 'bin_dir' correctly for Patroni to find the Postgres binaries. In my case I have Postgres 11, so my directory looks like this: '/usr/lib/postgresql/11/bin'. This is the directory Patroni is going to look for inside the container. And 'data_dir' is where the data will be stored inside the container. Later on we'll bind it to an actual volume on our drive so that we don't lose all the data if the Patroni cluster fails for some reason.
```
postgresql:
data_dir: /data/patroni
bin_dir: /usr/lib/postgresql/11/bin
```
I also list all the zookeeper servers here to feed them to patronictl later. Note that if you don't specify them here, we'll end up with a broken Patroni command-line tool (patronictl). Also, I'd like to point out that we don't use IPs to locate the zookeeper servers; we feed Patroni with the service names instead. It's a feature of docker swarm we are taking advantage of.
```
zookeeper:
hosts:
- zoo1:2181
- zoo2:2181
- zoo3:2181
```
* **patroni_entrypoint.sh**
The next one is where most settings come from in my setup. It's a script that will be executed after the docker container is started.
patroni_entrypoint.sh
```
#!/bin/sh
readonly CONTAINER_IP=$(hostname --ip-address)
readonly CONTAINER_API_ADDR="${CONTAINER_IP}:${PATRONI_API_CONNECT_PORT}"
readonly CONTAINER_POSTGRE_ADDR="${CONTAINER_IP}:5432"
export PATRONI_NAME="${PATRONI_NAME:-$(hostname)}"
export PATRONI_RESTAPI_CONNECT_ADDRESS="$CONTAINER_API_ADDR"
export PATRONI_RESTAPI_LISTEN="$CONTAINER_API_ADDR"
export PATRONI_POSTGRESQL_CONNECT_ADDRESS="$CONTAINER_POSTGRE_ADDR"
export PATRONI_POSTGRESQL_LISTEN="$CONTAINER_POSTGRE_ADDR"
export PATRONI_REPLICATION_USERNAME="$REPLICATION_NAME"
export PATRONI_REPLICATION_PASSWORD="$REPLICATION_PASS"
export PATRONI_SUPERUSER_USERNAME="$SU_NAME"
export PATRONI_SUPERUSER_PASSWORD="$SU_PASS"
export PATRONI_approle_PASSWORD="$POSTGRES_APP_ROLE_PASS"
export PATRONI_approle_OPTIONS="${PATRONI_admin_OPTIONS:-createdb, createrole}"
exec /usr/local/bin/patroni /etc/patroni.yml
```
Details: Important!
Actually, the main point of even having this **patroni_entrypoint.sh** is that Patroni simply won't start without knowing the IP address of its host. And since the host is a docker container, we are in a situation where we first need to find out which IP was granted to the container, and only then execute the Patroni start-up command. This crucial task is handled here:
```
readonly CONTAINER_IP=$(hostname --ip-address)
readonly CONTAINER_API_ADDR="${CONTAINER_IP}:${PATRONI_API_CONNECT_PORT}"
readonly CONTAINER_POSTGRE_ADDR="${CONTAINER_IP}:5432"
...
export PATRONI_RESTAPI_CONNECT_ADDRESS="$CONTAINER_API_ADDR"
export PATRONI_RESTAPI_LISTEN="$CONTAINER_API_ADDR"
export PATRONI_POSTGRESQL_CONNECT_ADDRESS="$CONTAINER_POSTGRE_ADDR"
```
As you can see, in this script we take advantage of the 'Environment configuration' available for Patroni. It's another way, aside from the patroni.yml config file, to set parameters. PATRONI_RESTAPI_CONNECT_ADDRESS, PATRONI_RESTAPI_LISTEN and PATRONI_POSTGRESQL_CONNECT_ADDRESS are environment variables Patroni knows about and applies automatically as setup parameters. And by the way, they override the ones set locally in patroni.yml.
And here is another thing. The Patroni docs do not recommend using the superuser to connect your apps to the database. So we are going to use another user for the connection, which can be created with the lines below. It is also set through special env variables Patroni is aware of. Just replace 'approle' with any name of your preference.
```
export PATRONI_approle_PASSWORD="$POSTGRES_APP_ROLE_PASS"
export PATRONI_approle_OPTIONS="${PATRONI_admin_OPTIONS:-createdb, createrole}"
```
And with this last line, when everything is ready for the start, we execute Patroni, pointing it at patroni.yml:
```
exec /usr/local/bin/patroni /etc/patroni.yml
```
* **Dockerfile**
As for the Dockerfile I decided to keep it as simple as possible. Let's see what we've got here.
Dockerfile
```
FROM postgres:11
RUN apt-get update -y\
&& apt-get install python3 python3-pip -y\
&& pip3 install --upgrade setuptools\
&& pip3 install psycopg2-binary \
&& pip3 install patroni[zookeeper] \
&& mkdir /data/patroni -p \
&& chown postgres:postgres /data/patroni \
&& chmod 700 /data/patroni
COPY patroni.yml /etc/patroni.yml
COPY patroni_entrypoint.sh ./entrypoint.sh
USER postgres
ENTRYPOINT ["bin/sh", "/entrypoint.sh"]
```
Details
The most important thing here is the directory we create inside the container and its owner. Later, when we mount it to a volume on our hard drive, we're going to need to take care of it the same way we do here in the Dockerfile.
```
# the owner should be 'postgres' and the mode is 700
mkdir /data/patroni -p \
chown postgres:postgres /data/patroni \
chmod 700 /data/patroni
...
# we set the active user inside the container to postgres
USER postgres
```
The files we created earlier are copied here:
```
COPY patroni.yml /etc/patroni.yml
COPY patroni_entrypoint.sh ./entrypoint.sh
```
And, as mentioned above, at start-up we want to execute our entrypoint script:
```
ENTRYPOINT ["bin/sh", "/entrypoint.sh"]
```
That's it for handling the pre-requisites. Now we can finally build our patroni image. Let's give it a sound name 'patroni-test':
```
docker build -t patroni-test .
```
When the image is ready we can discuss the last but not least file we're gonna need here and it's the compose file, of course.
* **docker-compose-patroni.yml**
A well-configured compose file is something really crucial in this scenario, so let's pinpoint what we should take care of and which details we need to keep in mind.
docker-compose-patroni.yml
```
version: "3.4"
networks:
patroni_patroni:
external: true
services:
patroni1:
image: patroni-test
networks: [ patroni_patroni ]
ports:
- 5441:5432
- 8091:8091
hostname: patroni1
volumes:
- /patroni1:/data/patroni
environment:
PATRONI_API_CONNECT_PORT: 8091
REPLICATION_NAME: replicator
REPLICATION_PASS: replpass
SU_NAME: postgres
SU_PASS: supass
POSTGRES_APP_ROLE_PASS: appass
deploy:
replicas: 1
placement:
constraints: [node.hostname == floitet]
patroni2:
image: patroni-test
networks: [ patroni_patroni ]
ports:
- 5442:5432
- 8092:8091
hostname: patroni2
volumes:
- /patroni2:/data/patroni
environment:
PATRONI_API_CONNECT_PORT: 8091
REPLICATION_NAME: replicator
REPLICATION_PASS: replpass
SU_NAME: postgres
SU_PASS: supass
POSTGRES_APP_ROLE_PASS: appass
deploy:
replicas: 1
placement:
constraints: [node.hostname == floitet]
patroni3:
image: patroni-test
networks: [ patroni_patroni ]
ports:
- 5443:5432
- 8093:8091
hostname: patroni3
volumes:
- /patroni3:/data/patroni
environment:
PATRONI_API_CONNECT_PORT: 8091
REPLICATION_NAME: replicator
REPLICATION_PASS: replpass
SU_NAME: postgres
SU_PASS: supass
POSTGRES_APP_ROLE_PASS: appass
deploy:
replicas: 1
placement:
constraints: [node.hostname == floitet]
```
Details (also important)
The first detail that pops up is the network thing we talked about earlier. We want to deploy the Patroni services in the same network as the Zookeeper services. This way the 'zoo1', 'zoo2', 'zoo3' names we listed in **patroni.yml** for the zookeeper servers are going to work out for us.
```
networks:
patroni_patroni:
external: true
```
As for the ports, we have a database and an API, and each of them requires its own pair of ports.
```
ports:
- 5441:5432
- 8091:8091
...
environment:
PATRONI_API_CONNECT_PORT: 8091
# we need to make sure that the Patroni API connect port
# is the same as the one set as the target port for the docker service
```
Of course, we also need to provide the rest of the environment variables we promised when configuring our entrypoint script for Patroni, but that's not all. There is an issue with the mount directory we need to take care of.
```
volumes:
- /patroni3:/data/patroni
```
As you can see, the '/data/patroni' directory we create in the Dockerfile is mounted to a local folder we actually need to create. And not only create, but also set the proper owner and access mode, just like in this example:
```
sudo mkdir /patroni3
sudo chown 999:999 /patroni3
sudo chmod 700 /patroni3
# 999 is the default uid for the postgres user
# repeat these steps for each patroni service mount dir
```
With all these steps done properly, we are ready to deploy the Patroni cluster at last:
```
sudo docker stack deploy --compose-file docker-compose-patroni.yml patroni
```
After the deployment has finished, we should see something like this in the service logs, indicating that the cluster is doing well:
```
INFO: Lock owner: patroni3; I am patroni1
INFO: does not have lock
INFO: no action. i am a secondary and i am following a leader
```
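To read those logs, ask docker for the logs of any of the Patroni services (the service names come from 'docker service ls', so in this setup they are prefixed with the stack name):
```
# follow the logs of the first Patroni service
docker service logs -f patroni_patroni1
```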
But it would be painful if we had no choice but to read through the logs every time we want to check on the cluster health, so let's dig into patronictl. What we need to do is to get the id of the actual container that is running any of the Patroni services:
```
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0090ce33a05 patroni-test:latest "bin/sh /entrypoint.…" 3 hours ago Up 3 hours 5432/tcp patroni_patroni1.1.tgjzpjyuip6ge8szz5lsf8kcq
...
```
And simply exec into this container with the following command:
```
sudo docker exec -ti a0090ce33a05 /bin/bash
# inside the container
# we need to specify the cluster name to list its members
# it is the 'scope' parameter in patroni.yml ('patroni' in our case)
patronictl list patroni
# and oops
Error: 'Can not find suitable configuration of distributed configuration store\nAvailable implementations: exhibitor, kubernetes, zookeeper'
```
The thing is that patronictl requires patroni.yml to retrieve the info about the zookeeper servers. And it doesn't know where we put our config, so we need to specify its path explicitly, like so:
```
patronictl -c /etc/patroni.yml list patroni
# and here is the nice output with the current states
+ Cluster: patroni (6893104757524385823) --+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+----------+-----------+---------+---------+----+-----------+
| patroni1 | 10.0.1.93 | Replica | running | 8 | 0 |
| patroni2 | 10.0.1.91 | Replica | running | 8 | 0 |
| patroni3 | 10.0.1.92 | Leader | running | 8 | |
+----------+-----------+---------+---------+----+-----------+
```
HA Proxy
--------
Now everything seems to be set up the way we wanted, and we can easily access PostgreSQL on the leader service and perform the operations we need. But there is one last problem we should get rid of: how do we know where that leader is at runtime? Do we have to check every time and manually switch to another node when the leader crashes? That would be extremely unpleasant, no doubt. No worries, this is a job for HAProxy. Just like we did with Patroni, we might want to create a separate folder for all the build/deploy files and then create the following files there:
* **haproxy.cfg**
The config file we're gonna need to copy into our custom haproxy image.
haproxy.cfg
```
global
maxconn 100
stats socket /run/haproxy/haproxy.sock
stats timeout 2m # Wait up to 2 minutes for input
defaults
log global
mode tcp
retries 2
timeout client 30m
timeout connect 4s
timeout server 30m
timeout check 5s
listen stats
mode http
bind *:7000
stats enable
stats uri /
listen postgres
bind *:5000
option httpchk
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server patroni1 patroni1:5432 maxconn 100 check port 8091
server patroni2 patroni2:5432 maxconn 100 check port 8091
server patroni3 patroni3:5432 maxconn 100 check port 8091
```
Details
Here we specify the ports we want to access the service on:
```
# one is for stats
listen stats
mode http
bind *:7000
# the second one is for connections to postgres
listen postgres
bind *:5000
```
And simply list all the patroni services we have created earlier:
```
server patroni1 patroni1:5432 maxconn 100 check port 8091
server patroni2 patroni2:5432 maxconn 100 check port 8091
server patroni3 patroni3:5432 maxconn 100 check port 8091
```
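The reason the health check points at port 8091 is that Patroni's REST API answers the check with HTTP 200 only on the node that currently holds the leader lock and with an error code on replicas; that's what lets HAProxy route port 5000 to the primary only. You can verify it by hand from inside the patroni_patroni network (node names are the ones from this tutorial):
```
# returns 200 on the current leader and 503 on replicas
curl -s -o /dev/null -w "%{http_code}\n" http://patroni1:8091
```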
And the last thing: we need the line shown below if we want to check our Patroni cluster stats with the HAProxy stats tool in the terminal, from within the docker container:
```
stats socket /run/haproxy/haproxy.sock
```
* **Dockerfile**
There's not much to explain in the Dockerfile; I guess it's pretty self-explanatory.
Dockerfile
```
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
RUN mkdir /run/haproxy &&\
apt-get update -y &&\
apt-get install -y hatop &&\
apt-get clean
```
* **docker-compose-haproxy.yml**
And the compose file for HAProxy looks like this; it's also quite simple compared to the other services we've already covered:
docker-compose-haproxy.yml
```
version: "3.7"
networks:
patroni_patroni:
external: true
services:
haproxy:
image: haproxy-patroni
networks:
- patroni_patroni
ports:
- 5000:5000
- 7000:7000
hostname: haproxy
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.hostname == floitet]
```
After we have all the files created, let's build the image and deploy it:
```
# build (note the trailing dot: the build context is the current directory)
docker build -t haproxy-patroni .
# deploy into the same stack as before
docker stack deploy --compose-file docker-compose-haproxy.yml patroni
```
Once we have HAProxy up and running, we can exec into its container and check the Patroni cluster stats from there. It's done with the following commands:
```
sudo docker ps | grep haproxy
sudo docker exec -ti $container_id /bin/bash
hatop -s /var/run/haproxy/haproxy.sock
```
And with this command we'll get the output of this kind:
![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/452/1b0/deb/4521b0deba997c927cbba7586aadbb9c.png)
To be honest, I personally prefer to check the status with patronictl, but HAProxy is another option which is also nice to have in the administration toolset. In the very beginning I promised to show three ways to access the cluster stats. So the third way of doing it is to use the Patroni API directly, which is a cool way as it provides expanded, ample info.
Patroni API
-----------
You can find a full, detailed overview of its options in the Patroni docs; here I'm going to quickly show the most common ones I use and how to use them in our docker swarm setup.
We won't be able to access any of the Patroni services' APIs from outside of the 'patroni_patroni' network we created to keep all our services together. So what we can do is build a simple custom curl image to retrieve the info in a human-readable format.
Dockerfile
```
FROM alpine:3.10
RUN apk add --no-cache curl jq bash
CMD ["/bin/sh"]
```
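The 'curl-jq' name used in the next command is just the tag I'm assuming for this image, so build it first from the directory containing this Dockerfile:
```
docker build -t curl-jq .
```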
And then run a container with this image, connected to the 'patroni_patroni' network.
```
docker run --rm -ti --network=patroni_patroni curl-jq
```
Now we can call Patroni nodes by their names and get stats like so:
Node stats
```
curl -s patroni1:8091/patroni | jq
{
"patroni": {
"scope": "patroni",
"version": "2.0.1"
},
"database_system_identifier": "6893104757524385823",
"postmaster_start_time": "2020-11-15 19:47:33.917 UTC",
"timeline": 10,
"xlog": {
"received_location": 100904544,
"replayed_timestamp": null,
"replayed_location": 100904544,
"paused": false
},
"role": "replica",
"cluster_unlocked": false,
"state": "running",
"server_version": 110009
}
```
Cluster stats
```
curl -s patroni1:8091/cluster | jq
{
"members": [
{
"port": 5432,
"host": "10.0.1.5",
"timeline": 10,
"lag": 0,
"role": "replica",
"name": "patroni1",
"state": "running",
"api_url": "http://10.0.1.5:8091/patroni"
},
{
"port": 5432,
"host": "10.0.1.4",
"timeline": 10,
"role": "leader",
"name": "patroni2",
"state": "running",
"api_url": "http://10.0.1.4:8091/patroni"
},
{
"port": 5432,
"host": "10.0.1.3",
"lag": "unknown",
"role": "replica",
"name": "patroni3",
"state": "running",
"api_url": "http://10.0.1.3:8091/patroni"
}
]
}
```
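A couple of other handy endpoints of the same API, which answer with plain HTTP status codes and are therefore easy to use in scripts (behavior as of recent Patroni versions):
```
# 200 if PostgreSQL is up and running on this node
curl -s -o /dev/null -w "%{http_code}\n" patroni1:8091/health
# 200 only if this node currently holds the leader lock
curl -s -o /dev/null -w "%{http_code}\n" patroni1:8091/leader
```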
Pretty much everything we can do with the Patroni cluster can be done through Patroni API, so if you want to get to know better the options available feel free to read official docs on this topic.
PostgreSQL Connection
---------------------
The same thing here: first run a container with the Postgres instance and then from within this container get connected.
```
docker run --rm -ti --network=patroni_patroni postgres:11 /bin/bash
# access a specific patroni node
psql --host patroni1 --port 5432 -U approle -d postgres
# access the leader through haproxy
psql --host haproxy --port 5000 -U approle -d postgres
# user 'approle' doesn't have a default database,
# so we need to specify one with the '-d' flag
```
Wrap-up
-------
Now we can experiment with the Patroni cluster as if it was an actual 3-node setup by simply scaling services. In my case when patroni3 happened to be the leader, I can go ahead and do this:
```
docker service scale patroni_patroni3=0
```
This command will disable the Patroni service by killing its only running container. Now I can make sure that failover has happened and the leader role moved to another service:
```
postgres@patroni1:/$ patronictl -c /etc/patroni.yml list patroni
+ Cluster: patroni (6893104757524385823) --+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+----------+-----------+---------+---------+----+-----------+
| patroni1 | 10.0.1.93 | Leader | running | 9 | |
| patroni2 | 10.0.1.91 | Replica | running | 9 | 0 |
+----------+-----------+---------+---------+----+-----------+
```
After I scale it back to "1", I'll get my 3-node cluster back in shape, with patroni3, the former leader, now running as a replica.
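Scaling back is the same command with the replica count restored:
```
docker service scale patroni_patroni3=1
```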
From this point, you are able to run your experiments with the Patroni cluster and see for yourself how it handles critical situations.
Outroduction
------------
As I promised, I'm going to provide a sample test script and instructions on how to approach it. So if you need something for a quick test run, you are more than welcome to read the spoiler section below. If you already have your own test scenarios in mind and don't need any pre-made solutions, just skip it without any worries.
Patroni cluster test
So, for the readers who want to get hands-on with testing the Patroni cluster right away with something pre-made, I created [this script](https://pastebin.com/p23sp83L). It's super simple and I believe you won't have problems getting a grasp of it. Basically, it just writes the current time to the database through the haproxy gateway every second. Below I'm going to show step by step how to approach it and what the outcome of the test run on my local stand was.
* **Step 1.**
Assume you've already downloaded the script from the link and put it somewhere on your machine. If not, do this preparation and follow up. From here we'll move on and create a docker container from an official Microsoft SDK image like so:
```
docker run --rm -ti --network=patroni_patroni -v /home/floitet/Documents/patroni-test-script:/home mcr.microsoft.com/dotnet/sdk /bin/bash
```
The important thing is that we get connected to the 'patroni_patroni' network. Another crucial detail is that we want to mount the directory where you've put the script into this container. This way you can easily access it from within the container in the '/home' directory.
* **Step 2.**
Now we need to take care of getting the only DLL our script needs in order to run. In the '/home' directory, let's create a new folder for the console app. I'm going to call it 'patroni-test'. Then cd to this directory and run the following command:
```
dotnet new console
# and the output should look like:
Processing post-creation actions...
Running 'dotnet restore' on /home/patroni-test/patroni-test.csproj...
Determining projects to restore...
Restored /home/patroni-test/patroni-test.csproj (in 61 ms).
Restore succeeded.
```
And from here we can add the package we'll be using as a dependency for our script:
```
dotnet add package npgsql
```
And after that simply pack the project:
```
dotnet pack
```
If everything went as expected, you'll get 'Npgsql.dll' sitting here:
'patroni-test/bin/Debug/net5.0/Npgsql.dll'.
It is exactly the path we reference in our script so if yours differs from mine, you're gonna need to change it in the script.
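The script also assumes that the target database and table already exist. If your copy of the script doesn't create them itself, you can prepare them once through haproxy (the names here are the ones that show up in Step 4; adjust if yours differ):
```
psql --host haproxy --port 5000 -U postgres -d postgres -c "CREATE DATABASE patronitestdb OWNER approle;"
psql --host haproxy --port 5000 -U approle -d patronitestdb -c "CREATE TABLE records (\"time\" time);"
```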
And what we do next is just run the script:
```
dotnet fsi /home/patroni-test.fsx
# and get output like this:
11/18/2020 22:29:32 +00:00
11/18/2020 22:29:33 +00:00
11/18/2020 22:29:34 +00:00
```
> Make sure you keep the terminal with the running script open
>
>
* **Step 3.**
Let's check the Patroni cluster to see where the leader is, using patronictl, the Patroni API, or HAProxy; any of them is fine. In my case the leader was 'patroni2':
```
+ Cluster: patroni (6893104757524385823) --+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+----------+-----------+---------+---------+----+-----------+
| patroni1 | 10.0.1.18 | Replica | running | 21 | 0 |
| patroni2 | 10.0.1.22 | Leader | running | 21 | |
| patroni3 | 10.0.1.24 | Replica | running | 21 | 0 |
+----------+-----------+---------+---------+----+-----------+
```
So what we need to get done at this point is to open another terminal and fail the leader node:
```
docker service ls | grep patroni
docker service scale $patroni2-id=0
```
After a short while, the terminal with the running script will start throwing errors:
```
# let's memorize the time of the last successful insert
11/18/2020 22:33:06 +00:00
Error
Error
Error
```
If we check the Patroni cluster stats at this moment, we might still see patroni2 reporting a healthy state and running as the leader due to some delay. But after a while it is going to fail, and the cluster, after a short leader-election stage, will come to the following state:
```
+ Cluster: patroni (6893104757524385823) --+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+----------+-----------+---------+---------+----+-----------+
| patroni1 | 10.0.1.18 | Replica | running | 21 | 0 |
| patroni3 | 10.0.1.24 | Leader | running | 21 | |
+----------+-----------+---------+---------+----+-----------+
```
If we go back to our script output we should notice that the connection has finally recovered and the logs are as follows:
```
Error
Error
Error
11/18/2020 22:33:48 +00:00
11/18/2020 22:33:49 +00:00
11/18/2020 22:33:50 +00:00
11/18/2020 22:33:51 +00:00
```
* **Step 4.**
Let's go ahead and check if the database is in a proper state after a failover:
```
docker run --rm -ti --network=patroni_patroni postgres:11 /bin/bash
psql --host haproxy --port 5000 -U approle -d postgres
postgres=> \c patronitestdb
You are now connected to database "patronitestdb" as user "approle".
-- I set the time a little earlier than when the crash happened
patronitestdb=> select * from records where time > '22:33:04' limit 15;
time
-----------------
22:33:04.171641
22:33:05.205022
22:33:06.231735
-- as we can see, in my case it took around 42 seconds
-- for the connection to recover
22:33:48.345111
22:33:49.36756
22:33:50.374771
22:33:51.383118
22:33:52.391474
22:33:53.399774
22:33:54.408107
22:33:55.416225
22:33:56.424595
22:33:57.432954
22:33:58.441262
22:33:59.449541
```
**Summary**
From this little experiment, we can conclude that Patroni managed to serve its purpose. After the failure occurred and the leader was re-elected we managed to reconnect and keep working with the database. And the previous data is all present on the leader node which is what we expected. Maybe it could have switched to another node a little faster than in 42 seconds, but at the end of the day, it's not that critical.
I suppose we should consider our work with this tutorial finished. Hope it helped you figure out the basics of a Patroni cluster setup and hopefully it was useful. Thanks for your attention, and let the Patroni guardian keep you and your data safe at all times!
29 August 2012 11:59 [Source: ICIS news]
SINGAPORE (ICIS)--
Jiangsu Sopo made a net profit of CNY7.47m in the same period a year earlier.
The company’s operating loss for the period was down by more than four times at CNY27.2m, according to the statement.
Jiangsu Sopo produced 52,400 tonnes of caustic soda from January to June this year, which represents 44.4% of its yearly target in 2012, down by 3.5% compared with the same period in 2011, according to the statement.
Furthermore, the company invested CNY28.5m in the construction of various projects in the first half of this year. An 80,000 tonne/year ion-exchange membrane caustic soda project cost CNY19.7m and it is 70% completed, according to the statement.
Jiangsu Sopo Chemical Industry Shareholding is a subsidiary of Jiangsu Sopo Group and its main products include caustic soda and bleaching powder.
This build brings new features, bug fixes and improvements for PHP and the Web, and takes on the latest improvements in the IntelliJ Platform.
New code style setting: blank lines before namespace
We’ve added a new code style setting to specify the minimum blank lines before namespace. Now you are able to tune this part according to your preferred code style. The default value is set to 1.
Align assignment now affects shorthand operators
Now the option Settings|Code Style|PHP|Wrapping and Braces|Assignment Statement|Align consecutive assignments takes into account shorthand operators.
“Download from…” option in deployment
On your request, we've implemented a "Download from…" option in the deployment actions that allows you to choose which server a file or folder should be downloaded from. Before the introduction of this option, files could be downloaded only from the default deployment server.
New “Copy Type” and “Jump to Type Source” actions from the Debugger Variables View
We’ve added two new actions to Debugger Variables View: Jump to Type Source and Copy Type. Jump to Type Source allows you to navigate directly to the class of the current variable to inspect its code. While Copy Type action copies the type of the variable into the clipboard for later usage in your code or for sending it as a reference to your colleague.
See the full list of bug-fixes and improvements list in our issue tracker and the complete release notes.
Download PhpStorm 2017.1 EAP build 171.2152 for your platform from the project EAP page or click “Update” in your JetBrains Toolbox and please do report any bugs and feature request to our Issue Tracker.
Your JetBrains PhpStorm Team
The Drive to Develop
Prerequisites
- You must have an Amazon Web Services account ().
- You must have signed up to use the Alexa Site Thumbnail ().
- Assumes python 2.4 or later.
Running the Sample
- Extract the .zip file into a working directory.
- Edit the ThumbnailUtility.py file to include your Access Key ID and Secret Access Key.
- Open up the python interpreter and run the following (or you can include this in some app):
from ThumbnailUtility import *
create_thumbnail('kelvinism.com', 'Large')
And for a list:
from ThumbnailUtility import *
all_sites = ['kelvinism.com', 'alexa.com', 'amazon.com']
create_thumbnail_list(all_sites, 'Small')
Note: I think most people use Python (for doing web things) in a framework, so this code snippet doesn't include the mechanism of actually displaying the images, it just returns the image location. However, take a look in readme.txt for more examples of actually displaying the images and more advanced usage.
If you need help or run into a stumbling block, don't hesitate contacting me: kelvin [-at-] kelvinism.com
[[!img Logo]
[[!format rawhtml """
!html
"""]]:
* [[!format txt """
void good_function(void) {
    if (a) {
        printf("Hello World!\n");
        a = 0;
    }
}
"""]]And this is wrong: [[!format txt """
void bad_function(void)
{
    if (a)
    {
        printf("Hello World!\n");
        a = 0;
    }
}
"""]]
- Avoid unnecessary curly braces. Good code:
* [[!format txt """
if (!braces_needed) printf("This is compact and neat.\n");
"""]]Bad code: [[!format txt """
if (!braces_needed) {
    printf("This is superfluous and noisy.\n");
}
"""]]
- Don't put the return type of a function on a separate line. This is good:
* [[!format txt """
int good_function(void) {
}
"""]]and this is bad: [[!format txt """
int
bad_function(void) {
}
"""]]
- On function calls and definitions, don't put an extra space between the function name and the opening parenthesis of the argument list. This good:
- [[!format txt """ double sin(double x); """]]This bad: [[!format txt """ double sin (double x); """]]
* [[!format txt """
typedef enum pa_resample_method { / ... / } pa_resample_method_t; """]]
- No C++ comments please! i.e. this is good:
* [[!format txt """
/* This is a good comment */
"""]]and this is bad: [[!format txt """
// This is a bad comment
"""]]
* [[!format txt """
void good_code(int a, int b) {
    pa_assert(a > 47);
    pa_assert(b != 0);

    /* ... */
}
"""]]Bad code: [[!format txt """
void bad_code(int a, int b) {
    pa_assert(a > 47 && b != 0);
}
"""]]
1. Errors are returned from functions as negative values. Success is returned as 0 or positive value.
1. Check for error codes on every system call, and every external library call. If you are sure that calls like that cannot return an error, make that clear by wrapping it in
pa_assert_se(). i.e.:
* [[!format txt """
pa_assert_se(close(fd) == 0);
"""]]Please note that
pa_assert_se() is identical to
pa_assert(), except that it is not optimized away if NDEBUG is defined. (
se stands for side effect)
1. Every .c file should have a matching .h file (exceptions allowed)
1. In .c files we include .h files whose definitions we make use of with
#include <>. Header files which we implement are to be included with
#include "".
1. If
#include <> is used the full path to the file relative to
src/ should be used.
1..
1.:
[[!format txt """ A few of us had an IRC conversation about whether or not to localise PulseAudio's log messages. The suggestion/outcome is to add localisation only if all of the following points are fulfilled:
1) The message is of type "warning" or "error". (Debug and info levels are seldom exposed to end users.)
2) The translator is likely to understand the sentence. (Otherwise we'll have a useless translation anyway.)
3) It's at least remotely likely that someone will ever encounter it. (Otherwise we're just wasting translator's time.):
[[!format txt """ )) """]] | http://freedesktop.org/wiki/Software/PulseAudio/Documentation/Developer/CodingStyle/?action=SyncPages | CC-MAIN-2013-20 | refinedweb | 424 | 78.14 |
Lexical Dispatch in Python
A recent article on Ikke’s blog shows how to emulate a C switch statement using Python. (I’ve adapted the code slightly for the purposes of this note).
def handle_one(): return 'one'
def handle_two(): return 'two'
def handle_three(): return 'three'
def handle_default(): return 'unknown'

cases = dict(one=handle_one,
             two=handle_two,
             three=handle_three)

for i in 'one', 'two', 'three', 'four':
    handler = cases.get(i, handle_default)
    print handler()
Here the
cases dict maps strings to functions and the subsequent switch is a simple matter of looking up and dispatching to the correct function. This is good idiomatic Python code. When run, it outputs:
one two three unknown
Here’s an alternative technique which I’ve sometimes found useful. Rather than build an explicit
cases dict, we can just use one of the dicts lurking behind the scenes — in this case the one supplied by the built-in
globals() function. Leaving the
handle_*() functions as before, we could write:
for i in 'one', 'two', 'three', 'four':
    handler = globals().get('handle_%s' % i, handle_default)
    print handler()
Globals() returns us a
dict mapping names in the current scope to their values. Since our handler functions are uniformly named, some string formatting combined with a simple dictionary look-up gets the required function.
A warning: it’s unusual to access objects in the global scope in this way, and in this particular case, the original explicit dictionary dispatch would be better. In other situations though, when the scope narrows to a class or a module, it may well be worth remembering that classes and modules behave rather like dicts which map names to values. The built-in
getattr() function can then be used as a function dispatcher. Here’s a class-based example:
PLAIN, BOLD, LINK, DATE = 'PLAIN BOLD LINK DATE'.split()

class Paragraph(object):
    def __init__(self):
        self.text_so_far = ''

    def __str__(self):
        return self._do_tag(self.text_so_far, 'p')

    def _do_tag(self, text, tag):
        return '<%s>%s</%s>' % (tag, text, tag)

    def do_bold(self, text):
        return self._do_tag(text, 'b')

    def do_link(self, text):
        return '<a href="%s">%s</a>' % (text, text)

    def do_plain(self, text):
        return text

    def append(self, text, markup=PLAIN):
        handler = getattr(self, 'do_%s' % markup.lower())
        self.text_so_far += handler(text)
        return self
Maybe not the most fully-formed of classes, but I hope you get the idea! Incidentally,
append() returns a reference to
self so clients can chain calls together.
>>> print Paragraph().append("Word Aligned", BOLD
...                 ).append(" is at "
...                 ).append("", LINK)
<p><b>Word Aligned</b> is at <a href=""></a></p>
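The same trick works at module scope. As a sketch, assuming a hypothetical handlers module that contains the handle_*() functions from the first example:

import handlers  # hypothetical module holding handle_one, handle_two, handle_default, ...

def dispatch(name):
    # look the handler up by name on the module, falling back to the default
    handler = getattr(handlers, 'handle_%s' % name, handlers.handle_default)
    return handler()

print dispatch('two')   # two
print dispatch('nine')  # unknown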
By the way, “Lexical Dispatch” isn’t an official term or one I’ve heard before. It’s just a fancy way of saying “call things based on their name” — and the term “call by name” has already been taken.
Say I am reading an xml file using SAX Parser : Here is the format of xml file
<BookList> <BookTitle_1> C++ For Dummies </BookTitle_1> <BookAurthor_1> Charles </BookAuthor> <BookISBN_1> ISBN -1023-234 </BookISBN_2> <BookTitle_2> Java For Dummies </Booktitle_2> <BookAuthor_2> Henry </BookAuthor_2> <BookISN_2> ISBN - 231-235 </BookISN_2> </BookList>
And then I have class call Books:
public class Book { private String Name; private String Author; private String ISBN; public void SetName(String Title) { this.Name = Title; } public String getName() { return this.Name; } public void setAuthor(String _author) { this.Author = _author; } public String getAuthor() { }
And then I have an ArrayList class of type Book as such:
public class BookList { private List <Book> books ; BookList(Book _books) { books = = new ArrayList<Book>(); books.add(_books); } }
And lastly I have the main class,
where I parse and read the books from the xml file. Currently when I parse an xml file and say I read the tag <BookTitle_1> I call the SetName() and then call BookList() to create a new arraylist for the new book and add it to the arraylist.
But how can I create a dynamic setters and getters method so that whenever a new book title is read it calls the new setter and getter method to add it to arraylist.
Currently my code over writes the previously stored book and prints that book out. I have heard there is something called reflection. If I should use reflection, please can some one show me an example?
Thank you.
Alex Karasulu wrote:
> Hi all,
>
> On Jan 16, 2008 5:26 AM, Emmanuel Lecharny <[email protected]
> <mailto:[email protected]>> wrote:
>
> Hi Alex, PAM,
>
> if we are to go away from JNDI, Option 2 is out of question.
> Anyway, the
> backend role is to store data, which has nothing in common with
> Naming,
> isn't it ?
>
>
> Well if you mean the tables yes you're right it has little to do with
> javax.naming except for one little thing that keeps pissing me off
> which is the fact that normalization is needed by indices to function
> properly. And normalizers generate NamingExceptions. If anyone has
> any idea on how best to deal with this please let me know.
Well, indices should have been normalized *before* being stored, and
this is what we are working on with the new ServerEntry/Attribute/Value
stuff, so your problem will vanish soon ;) (well, not _that_ soon, but ...)
>
>
> and I
> see no reason to depend on NamingException when we have nothing to do
> with LDAP.
>
>
> We still have some residual dependence on LDAP at higher levels like
> when we talk about Index or Partition because of the nature of their
> interfaces. Partitions are still partial to the LDAP namespace or we
> would be screwed. They still need to be tailored to the LDAP
> namespace. There are in my mind 2 layers of abstractions and their
> interfaces which I should probably clarify
>
> Partition Abstraction Layer
> --------------------------------
> o layer between the server and the entry store/search engine
> (eventually we might separate search from stores)
> o interfaces highly dependent on the LDAP namespace
>
> BTree Partition Layer
> -------------------------
> o layer between an abstract partition implementation with concrete
> search engine which uses a concrete search engine based on a two
> column db design backed by BTree (or similar primitive data structures)
> o moderately dependent on the namespace
>
> Note the BTree Partition Layer is where we have interfaces defined
> like Table, and Index. These structures along with Cursors are to be
> used by this default search engine to conduct search. We can then
> swap out btree implementations between in memory, JE and JDBM easily
> without messing with the search algorithm.
This is where things get tricky... But as soon as we can clearly define
the different layers without having some kind of overlap, then we will
be done. The problem with our current impl is that we are mixing the search
engine with the way data are stored. Your 'cursor' implementation will
help a lot solving this problem.
--
--
cordialement, regards,
Emmanuel Lécharny
directory.apache.org
A very practical version of an Action Menu Item (AMI) is a variant that will run an application or a script on your local computer. For this to work you need to set up a connection between your browser and the script or application you wish to run. This link is called a custom browser protocol.
You may want to set up a type of link where if a user clicks on it, it will launch the [foo] application. Instead of having ‘http’ as the prefix, you need to designate a custom protocol, such as ‘foo’. Ideally you want a link that looks like:
foo://some/info/here.
The operating system has to be informed how to handle protocols. By default, all of the current operating systems know that ‘http’ should be handled by the default web browser, and ‘mailto’ should be handled by the default mail client. Sometimes when applications are installed, they register with the OS and tell it to launch the applications for a specific protocol.
As an example, if you install RV, the application registers
rvlink:// with the OS and tells it that RV will handle all
rvlink:// protocol requests to show an image or sequence in RV. So when a user clicks on a link that starts with
rvlink://, as you can do in Shotgun, the operating system will know to launch RV with the link and the application will parse the link and know how to handle it.
See the RV User Manual for more information about how RV can act as a protocol handler for URLs and the “rvlink” protocol.
Registering a protocol
Registering a protocol on Windows
On Windows, registering protocol handlers involves modifying the Windows Registry. Here is a generic example of what you want the registry key to look like:
HKEY_CLASSES_ROOT
   foo
      (Default) = "URL:foo Protocol"
      URL Protocol = ""
      shell
         open
            command
               (Default) = "foo_path" "%1"
The target URL would look like:
foo://host/path...
Note: For more information, please see.
Windows QT/QSetting example
If the application you are developing is written using the QT (or PyQT / PySide) framework, you can leverage the QSetting object to manage the creation of the registry keys for you.
This is what the code looks like to automatically have the application set up the registry keys:
// cmdLine points to the foo path.
// Add foo to the OS protocols and tell the OS to handle the protocol with it
QSettings fooKey("HKEY_CLASSES_ROOT\\foo", QSettings::NativeFormat);
fooKey.setValue(".", "URL:foo Protocol");
fooKey.setValue("URL Protocol", "");
QSettings fooOpenKey("HKEY_CLASSES_ROOT\\foo\\shell\\open\\command", QSettings::NativeFormat);
fooOpenKey.setValue(".", cmdLine);
Windows example that starts a Python script via a Shotgun AMI
A lot of AMIs that run locally may opt to start a simple Python script via the Python interpreter. This allows you to run simple scripts or even apps with GUIs (PyQT, PySide or your GUI framework of choice). Let’s look at a practical example that should get you started in this direction.
Step 1: Set up the custom shotgun:// protocol to launch the
python interpreter with the first argument being the script
sgTriggerScript.py and the second argument being
%1. It is important to understand that
%1 will be replaced by the URL that was clicked in the browser or the URL of the AMI that was invoked. This will become the first argument to your Python script.
Note: You may need to have full paths to your Python interpreter and your Python script. Please adjust accordingly.
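For example, a .reg file for the shotgun:// protocol could look like the sketch below; the interpreter and script paths are placeholders, so substitute your own:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\shotgun]
@="URL:shotgun Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\shotgun\shell\open\command]
@="\"C:\\Python27\\python.exe\" \"C:\\scripts\\sgTriggerScript.py\" \"%1\""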
Step 2: Parse the incoming URL in your Python script
In your script you will take the first argument that was provided, the URL, and parse it down to its components in order to understand the context in which the AMI was invoked. We’ve provided some simple scaffolding that shows how to do this in the following code.
Python script
import sys
import urlparse
import pprint

def main(args):
    # Make sure we have only one arg, the URL
    if len(args) != 1:
        return 1

    # Parse the URL:
    protocol, fullPath = args[0].split(":", 1)
    path, fullArgs = fullPath.split("?", 1)
    action = path.strip("/")
    args = fullArgs.split("&")
    params = urlparse.parse_qs(fullArgs)

    # This is where you can do something productive based on the params and the
    # action value in the URL. For now we'll just print out the contents of the
    # parsed URL.
    fh = open('output.txt', 'w')
    fh.write(pprint.pformat((action, params)))
    fh.close()

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
Step 3: Connect the shotgun:// URL to an Action Menu Item; when the menu item is invoked, the URL will be redirected to your script via the registered custom protocol.
In the
output.txt file in the same directory as your script you should now see something like this:
('processVersion', {'cols': ['code', 'image', 'entity', 'sg_status_list', 'user', 'description', 'created_at'], 'column_display_names': ['Version Name', 'Thumbnail', 'Link', 'Status', 'Artist', 'Description', 'Date Created'], 'entity_type': ['Version'], 'ids': ['6933,6934,6935'], 'page_id': ['4606'], 'project_id': ['86'], 'project_name': ['Test'], 'referrer_path': ['/detail/HumanUser/24'], 'selected_ids': ['6934'], 'server_hostname': ['patrick.shotgunstudio.com'], 'session_uuid': ['9676a296-7e16-11e7-8758-0242ac110004'], 'sort_column': ['created_at'], 'sort_direction': ['asc'], 'user_id': ['24'], 'user_login': ['shotgun_admin'], 'view': ['Default']})
Possible variants
By varying the keyword after the
shotgun:// part of the URL in your AMI, you can change the contents of the
action variable in your script, all the while keeping the same
shotgun:// protocol and registering only a single custom protocol. Then, based on the content of the
action variable and the contents of the parameters, your script can understand what the intended behavior should be.
Using this methodology you could open applications, upload content via services like FTP, archive data, send email, or generate PDF reports.
Registering a protocol on OSX
To register a protocol on OSX you need to create a .app bundle that is configured to run your application or script.
Start by writing the following script in the AppleScript Script Editor:
on open location this_URL
    do shell script "sgTriggerScript.py '" & this_URL & "'"
end open location
Pro tip: To ensure you are running Python from a specific shell, such as tcsh, you can change the do shell script for something like the following:
do shell script "tcsh -c \"sgTriggerScript.py '" & this_URL & "'\""
In the Script Editor, save your short script as an “Application Bundle”.
Find the saved Application Bundle, and Open Contents. Then, open the info.plist file and add the following to the plist dict:
<key>CFBundleIdentifier</key>
<string>com.mycompany.AppleScript.Shotgun</string>
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLName</key>
        <string>Shotgun</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>shotgun</string>
        </array>
    </dict>
</array>
You may want to change the following three strings:
com.mycompany.AppleScript://, the
.app bundle will respond to it and pass the URL over to your Python script. At this point the same script that was used in the Windows example can be used and all the same possibilities apply.
Registering a protocol on Linux
Use the following code:
gconftool-2 -t string -s /desktop/gnome/url-handlers/foo/command 'foo "%s"'
gconftool-2 -s /desktop/gnome/url-handlers/foo/needs_terminal false -t bool
gconftool-2 -s /desktop/gnome/url-handlers/foo/enabled true -t bool
Then use the settings from your local GConf file in the global defaults in:
/etc/gconf/gconf.xml.defaults/%gconf-tree.xml
Even though the change is only in the GNOME settings, it also works for KDE. Firefox and GNU IceCat defer to gnome-open regardless of what window manager you are running when it encounters a prefix it doesn’t understand (such as
foo://). So, other browsers, like Konqueror in KDE, won’t work under this scenario.
See for more information on setting up protocol handlers for Action Menu Items in Ubuntu. | https://support.shotgunsoftware.com/hc/en-us/articles/219031308-Launching-Applications-Using-Custom-Browser-Protocols | CC-MAIN-2020-24 | refinedweb | 1,259 | 53.21 |
#include <sys/conf.h> #include <sys/ddi.h> #include <sys/sunddi.h> int ddi_dev_is_sid(dev_info_t *dip);
Solaris DDI specific (Solaris DDI).
A pointer to the device's dev_info structure.
The ddi_dev_is_sid() function tells the caller whether the device described by dip is self-identifying, that is, a device that can unequivocally tell the system that it exists. This is useful for drivers that support both a self-identifying as well as a non-self-identifying variants of a device (and therefore must be probed).
Device is self-identifying.
Device is not self-identifying.
The ddi_dev_is_sid() function can be called from user, interrupt, or kernel context.
1 ... 2 int 3 bz_probe(dev_info_t *dip) 4 { 5 ... 6 if (ddi_dev_is_sid(dip) == DDI_SUCCESS) { 7 /* 8 * This is the self-identifying version (OpenBoot). 9 * No need to probe for it because we know it is there. 10 * The existence of dip && ddi_dev_is_sid() proves this. 11 */ 12 return (DDI_PROBE_DONTCARE); 13 } 14 /* 15 * Not a self-identifying variant of the device. Now we have to 16 * do some work to see whether it is really attached to the 17 * system. 18 */ 19 ...
probe(9E) Writing Device Drivers for Oracle Solaris 11.2 | http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-dev-is-sid-9f.html | CC-MAIN-2016-40 | refinedweb | 195 | 60.41 |
Elvis Chitsungo1,817 Points
Can someone help.
Can someone help
public class Spaceship{ public String shipType; public String getShipType() { return shipType; } public void setShipType(String shipType) { this.shipType = shipType; }
6 Answers
Calin Bogdan14,623 Points
There is a ‘}’ missing at the end of the file, the one that wraps the class up.
Calin Bogdan14,623 Points
One issue would be that shipType property is public instead of being private.
Elvis Chitsungo1,817 Points
U mean changing this ''public void'' to private void.
Elvis Chitsungo1,817 Points
After changing to private in now getting below errors.
./Spaceship.java:9: error: reached end of file while parsing } ^ JavaTester.java:138: error: shipType has private access in Spaceship ship.shipType = "TEST117"; ^ JavaTester.java:140: error: shipType has private access in Spaceship if (!tempReturn.equals(ship.shipType)) { ^ JavaTester.java:203: error: shipType has private access in Spaceship ship.shipType = "TEST117"; ^ JavaTester.java:205: error: shipType has private access in Spaceship if (!ship.shipType.equals("TEST249")) { ^ 5 errors
Calin Bogdan14,623 Points
That's because you have to use ship.getShipType() to get its value and ship.setShipType(type) to set its value. It is recommended to do so to ensure encapsulation and scalability.
Here are some solid reasons for using encapsulation in your code:
- Encapsulation of behaviour.
Elvis Chitsungo1,817 Points
This is now becoming more complicated, i don't know much about java. But with the first one i was only having one error. i am referring to my first post. Let me re-type it again
./Spaceship.java:9: error: reached end of file while parsing } ^ 1 error
Elvis Chitsungo1,817 Points
Calin Bogdan thanks, u have managed to give me the best answer.
Calin Bogdan14,623 Points
Calin Bogdan14,623 Points
This is correct:
The only change you have to make is public String shipType -> private String shipType.
Fields (shipType) have to be private, their accessors (getShipType, setShipType) have to be public. | https://teamtreehouse.com/community/can-someone-help-13 | CC-MAIN-2020-40 | refinedweb | 321 | 60.01 |
evalFunction
This example shows how to evaluate the expression
x+y in Python®. To evaluate an expression, pass a Python
dict value for the
globals namespace parameter.
Read the help for eval.
py.help('eval')
Help on built-in function eval in module builtins:.
Create a Python
dict variable for the
x and
y values.
workspace = py.dict(pyargs('x',1,'y',6))
workspace = Python dict with no properties. {'y': 6.0, 'x': 1.0}
Evaluate the expression.
res = py.eval('x+y',workspace)
res = 7
Add two numbers without assigning variables. Pass an empty
dict value for the
globals parameter.
res = py.eval('1+6',py.dict)
res = Python int with properties: denominator: [1×1 py.int] imag: [1×1 py.int] numerator: [1×1 py.int] real: [1×1 py.int] 7 | https://nl.mathworks.com/help/matlab/matlab_external/call-python-eval-function.html | CC-MAIN-2019-35 | refinedweb | 134 | 63.86 |
Sharepoint foundation 2010 webpart jobs
.. experience
urgent requirement for SharePoint developer who knows the office 365
..,
...speakers are welcomed ! The work can be
I am looking to learn advanced MS access
I am looking for a help with my database with Query / Report /etc"
My Windows Small Business Server 2011 needs a Maintenance/Update. Update Exchange 2010 with SP2 / Check for Errors Update SSL certificates. Check System for Errors.
...useful for website Support the web site for next 3 months after delivery Set up a preliminary project plan in qdPM ( Although this is small project but this would be a foundation for future collaboration ) Setup Matomo and configure the dashboard for web data analytics What is not our purpose or goal of this web site: Ecommerce Blog Product
..
I have MS Access application file(*.accdb). There are many tables on this file. Freelancer should convert these tables into mysql database. Please contact me with writing proper solution on bid text. Thanks.
We need foundation details sketch to be drafted in AutoCAD. The drawing should be done accurately.
Hi, We have hotel system so we need to modify us some forms and reports, our system developed VB.net 2010 and SQL Server 2008.
Zlecenie polega na przepisaniu na nowo kodu JS i CSS w celu
We did an interview with the trevor noah foundation that has been transcribed and needs to be written into a captivating article that attracts the reader and references his autobiography "born a crime" in regard to his mission
Microsoft Access Advanced 2010
.. at the start and end of each month. By start I
I would like to isolate bootstrap v3.3.7 to avoid css conflicts with SharePoint online. New namespace would be 'bootstrap-htech'. and user will see a result according to his answers. For example if user have 40 point from his answers he will get Answer A, if it's more than 60 he will get "Answer B". Please contact me if you can make...
We are a small construction company interested in having someone work with us to set up our SharePoint site, give instruction concerning use, & do some programming for custom functionality. The right person must be an expert in SharePoint design & able to communicate slowly & clearly.
I am trying to find someone that can create a brochure for me. I will need it base of my website which is [login to view URL] Please let me know if you are available to do it. Thank you!
need help on sharepoint small requirement.
I would like to Advanced Microsoft Access 2010/2013 ( report, Query, Search)
Sharepoint expertise... Look for a proposal on Sharepoint applications All default application from Microsoft migration to 2016. One has to list the features and functionalities of each module. Migration steps we plan to do when we migrate from sharepoint 2013 to sharepoint 2016. My price is 50$.
...com/watch?v=YAyc2bYRYP4 Background:
I would like the three documents: 1. 26.02.2015 Meditsiiniseadmed...At this URL: [login to view URL] Converted to Microsoft Excel format - so that they can be opened in Microsoft Excel 2010. Please start your response with the word "Rosebud", so that I know you have read what is required.
...existing emails with the same originator and titles and automatically merge incoming emails as they come in. Requirements and limitation: - Project must work on Outlook 2010 and Outlook 2016. - Must be develop using Yeoman, CANNOT be develop using Visual Studio. - Build as custom Outlook Add-in. - The Add-in allows two modes (can can activated.
Preaching and Teaching self and others. First Prize in Poetry Competition, ManMeet 2010 at IIM-Bangalore Paper presented at Annamalai University, sponsored by UGC, entitled “Innovative Time Management”
For Police Dept. Heavy on the collaboration/social media aspect. Announcements, Calendars, Outlook email (if possible), Workflows, Projects, Tableau Dashboards (already developed, just display), possibly Yammer [login to view URL]
Fiberglass pole calculations and foundation drawings.
...Your server will not need to support any methods beyond GET, although there is extra credit available for supporting other methods. The project template provides the basic foundation of your server in C++ and will allow you to focus more on the technical systems programming aspect of this lab, rather than needing to come up with a maintainable design
I need a developer to recreate the following assessment and build it...[login to view URL] I will provide the text and the maturity model categories and the method for scoring (e.g. 2/5). Must be able to set up foundation so that we can add maturity categories, questions and scores based on answers to those questions.. | https://www.freelancer.com/work/sharepoint-foundation-2010-webpart/ | CC-MAIN-2018-22 | refinedweb | 773 | 64.41 |
Synopsis edit
-
- lassign list varName ?varName ...?
Documentation edit
- official reference
- TIP 57
- proposed making the TclX lassign command a built-in Tcl command
Description editlassign assigns values from a list to the specified variables, and returns the remaining values. For example:
set end [lassign {1 2 3 4 5} a b c]will set $a to 1, $b to 2, $c to 3, and $end to 4 5.In lisp parlance:
set cdr [lassign $mylist car]The k trick is sometimes used with lassign to improve performance by causing the Tcl_Obj hlding $mylist to be unshared so that it can be re-used to hold the return value of lassign:
set cdr [lassign $mylist[set mylist {}] car]If there are more varNames than there are items in the list, the extra varNames are set to the empty string:
% lassign {1 2} a b c % puts $a 1 % puts $b 2 % puts $c %In Tcl prior to 8.5, foreach was used to achieve the functionality of lassign:
foreach {var1 var2 var3} $list break
DKF: The foreach trick was sometimes written as:
foreach {var1 var2 var3} $list {}This was unwise, as it would cause a second iteration (or more) to be done when $list contains more than 3 items (in this case). Putting the break in makes the behaviour predictable.
Example: Perl-ish shift editDKF cleverly points out that lassign makes a Perl-ish shift this easy:
proc shift {} { global argv set argv [lassign $argv v] return $v }On the other hand, Hemang Lavana observes that TclXers already have lvarpop ::argv, an exact synonym for shift.On the third hand, RS would use our old friend K to code like this:
proc shift {} { K [lindex $::argv 0] [set ::argv [lrange $::argv[set ::argv {}] 1 end]] }Lars H: Then I can't resist doing the above without the K:
proc shift {} { lindex $::argv [set ::argv [lrange $::argv[set ::argv {}] 1 end]; expr 0] }
Default Value editFM: here's a quick way to assign with default value, using apply:
proc args {spec list} { apply [list $spec [list foreach {e} $spec { uplevel 2 [list set [lindex $e 0] [set [lindex $e 0]]] }]] {*}$list } set L {} args {{a 0} {b 0} {c 0} args} $LAMG: Clever. Here's my version, which actually uses lassign, plus it matches lassign's value-variable ordering. It uses lcomp for brevity.
proc args {vals args} { set vars [lcomp {$name} for {name default} inside $args] set allvals "\[list [join [lcomp {"\[set [list $e]\]"} for e in $vars]]\]" apply [list $args "uplevel 2 \[list lassign $allvals $vars\]"] {*}$vals }Without lcomp:
proc args {vals args} { lassign "" scr vars foreach varspec $args { append scr " \[set [list [lindex $varspec 0]]\]" lappend vars [lindex $varspec 0] } apply [list $args "uplevel 2 \[list lassign \[list$scr\] $vars\]"] {*}$vals }This code reminds me of the movie "Inception" [1]. It exists, creates itself, and operates at and across multiple levels of interpretation. There's the caller, there's [args], there's [apply], then there's the [uplevel 2] that goes back to the caller. The caller is the waking world, [args] is the dream, [apply] is its dream-within-a-dream, and [uplevel] is its dream-within-a-dream that is used to implant an idea (or variable) into the waking world (the caller). And of course, the caller could itself be a child stack frame, so maybe reality is just another dream! ;^)Or maybe this code is a Matryoshka nesting doll [2] whose innermost doll contains the outside doll. ;^)Okay, now that I've put a cross-cap in reality [3], let me demonstrate how [args] is used:
args {1 2 3} a b c ;# a=1 b=2 c=3 args {1 2} a b {c 3} ;# a=1 b=2 c=3 args {} {a 1} {b 2} {c 3} ;# a=1 b=2 c=3 args {1 2 3 4 5} a b c args ;# a=1 b=2 c=3 args={4 5}FM: to conform to the AMG (and lassign) syntax.
proc args {values args} { apply [list $args [list foreach e $args { uplevel 2 [list set [lindex $e 0] [set [lindex $e 0]]] }]] {*}$values }both versions seem to have the same speed.PYK, 2015-03-06, wonders why lassign decided to mess with variable values that were already set, preventing default values from being set beforehand:
#warning, hypothetical semantics set color green lassign 15 size color set color ;# -> greenAMG: I'm pretty sure this behavior is imported from [foreach] which does the same thing. [foreach] is often used as a substitute for [lassign] on older Tcl.So, what to do when there are more variable names than list elements? I can think of four approaches:
- Set the extras to empty string. This is current [foreach] and [lassign] behavior.
- Leave the extras unmodified. This is PYK's preference.
- Unset the extras if they currently exist. Their existence can be tested later to see if they got a value.
- Throw an error. This is what Brush proposes, but now I may be leaning towards PYK's idea.
Gotcha: Ambiguity List Items that are the Empty String editCMcC 2005-11-14: I may be just exceptionally grumpy this morning, but the behavior of supplying default empty values to extra variables means you can't distinguish between a trailing var with no matching value, and one with a value of the empty string. Needs an option, -greedy or something, to distinguish between the two cases. Oh, and it annoys me that lset is already taken, because lassign doesn't resonate well with set.Kristian Scheibe: I agree with CMcC on both counts - supplying a default empty value when no matching value is provided is bad form; and lset/set would have been better than lassign/set. However, I have a few other tweaks I would suggest, then I'll tie it all together with code to do what I suggest.First, there is a fundamental asymmetry between the set and lassign behaviors: set copies right to left, while lassign goes from left to right. In fact, most computer languages use the idiom of right to left for assignment. However, there are certain advantages to the left to right behavior of lassign (in Tcl). For example, when assigning a list of variables to the contents of args. Using the right to left idiom would require eval.Still, the right-to-left behavior also has its benefits. It allows you to perform computations on the values before performing the assignment. Take, for example, this definition of factorial (borrowed from Tail call optimization):
proc fact0 n { set result 1. while {$n > 1} { set result [expr {$result * $n}] set n [expr {$n - 1}] } return $result }Now, with lassign as currently implemented, we can "improve" this as follows:
proc fact0 n { set result 1. while {$n > 1} { lassign [list [expr {$result * $n}] [expr {$n - 1}]] result n } return $result }I'm hard-pressed to believe that this is better. However, if we changed lassign to be lassign vars args, we can write this as:
proc fact0 n { set result 1. while {$n > 1} { lassign {result n} [expr {$result * $n}] [expr {$n - 1} ] } return $result }To my eye, at least, this is much more readable.So, I suggest that we use two procedures: lassign and lassignr (where "r" stands for "reverse"). lassign would be used for the "standard" behavior: right to left. lassignr would then be used for left to right. This is backwards from the way it is defined above for TclX and Tcl 8.5. Nonetheless, this behavior aligns better with our training and intuition.Also, this provides a couple of other benefits. First, the parallel to set is much more obvious. lassign and set both copy from right to left (of course, we are still left with the asymmetry in their names - I'll get to that later). And, we can now see why assigning an empty string to a variable which doesn't have a value supplied is bad form; this is not what set does! If you enter set a you get the value of $a, you don't assign the empty string to a. lassign should not either. If you want to assign the empty string using set you would enter:
set a {}With lassign, you would do something similar:
lassign {a b c} 1 {}Here, $a gets 1, $b gets the empty string, and $c is not touched. This behavior nicely parallels that of set, except that set returns the new value, and lassign returns the remaining values. So, let's take another step in that direction; we'll have lassign and lassignr return the "used" values instead.But this destroys a nice property of lassign. Can we recover that property? Almost. We can do what proc does; we can use the "args" variable name to indicate a variable that sucks up all the remaining items. So, now we get:
lassign {a b args} 1 2 3 4 5 6$a gets 1, $b gets 2, and args gets 3 4 5 6. Of course, we would make lassignr work similarly:
lassignr {1 2 3 4 5 6} a b argsBut, now that we have one of the nice behaviors of the proc "assignment", what about that other useful feature: default values? We can do that as well. So, if a value is not provided, then the default value is used:
lassign {a {b 2}} one$b gets the value 2. This also provides for the assignment of an empty list to a variable if the value is not provided. So, those who liked that behavior can have their wish as well:
lassign {a {b {}}} oneBut simple defaults are not always adequate. This only provides for constants. If something beyond that is required, then explicit lists are needed. For example:
lassign [list a [list b $defaultb]] oneThis gets to be ugly, so we make one more provision: we allow variable references within the defaults:
lassign {a {b $defaultb}} oneNow, this really begins to provide the simplicity and power that we should expect from a general purpose utility routine. And, it parallels other behaviors within Tcl (proc and set) well so that it feels natural.But we're still left with this lassign/set dichotomy. We can't rename lassign to be lset without potentially breaking someone's code, But notice that lassign now provides features that set does not. So, instead, let's create an assign procedure that provides these same features, but only for a single value:
assign {x 3} 7Sets x to 7. If no value is provided, x will be 3.So, we now have three functions, assign, lassign, and lassignr, that collectively provide useful and powerful features that, used wisely, can make your code more readable and maintainable. You could argue that you only "need" one of these (pick one) - the others are easily constructed from whichever is chosen. However, having all three provides symmetry and flexibility.I have provided the definitions of these functions below. The implementation is less interesting than the simple power these routines provide. I'm certain that many of you can improve these implementations. And, if you don't like my rationale on the naming of lassignr; then you can swap the names. It's easy to change other aspects as well; for example, if you still want lassign to return the unused values, it's relatively easy to modify these routines.
proc assign {var args} { if {[llength $var] > 1} { uplevel set $var } uplevel set [lindex $var 0] $args } proc lassign {vars args} { if { ([lindex $vars end] eq "args") && ([ llength $args] > [llength $vars])} { set last [expr {[llength $vars] - 1}] set args [lreplace $args $last end [lrange $args $last end]] } #This is required so that we can distinguish between the value {} and no #value foreach val $args {lappend vals [list $val]} foreach var $vars val $vals { lappend res [uplevel assign [list $var] $val] } return $res } proc lassignr {vals args} { uplevel lassign [list $args] $vals }slebetman: KS, your proposal seems to illustrate that you don't get the idea of lassign. For several years now I have used my own homegrown proc, unlist, that has the exact same syntax and semantics of lassign. The semantics behind lassign is not like set at all but more like scan where the semantics in most programming languages (at least in C and Python) is indeed assignment from left to right. The general use of a scanning function like lassign is that given an opaque list (one that you did not create) split it into individual variables.If you really understand the semantics lassign was trying to achieve then you wouldn't have proposed your:
lassign vars argsTo achieve the semantics of lassign but with right to left assignment you should have proposed:
lassign vars listOf course, your proposal above can work with Tcl8.5 using {*}:
lassign {var1 var2 var3} {*}$listBut that means for 90% of cases where you would use lassign you will have to also use {*}. Actually Tcl8.4 already has a command which does what lassign is supposed to do but with a syntax that assigns from right to left: foreach. Indeed, my home-grown unlist is simply a wrapper around foreach as demonstrated by sbron above. With 8.4, if you want lassign-like functionality you would do:
foreach {var1 var2 var3} $list {}Kristian Scheibe: slebetman, you're right, I did not get that the semantics of lassign (which is mnemonic for "list assign" should match those of scan and not set (which is a synonym for assign). Most languages refer to the operation of putting a value into a variable as "assignment", and, with only specialized exception, this is done right-to-left. I'm certain that others have made this same mistake; in fact, I count myself in good company, since the authors of the lassign TIP 57
set {x y} [LocateFeature $featureID]or
mset {x y} [LocateFeature $featureID]So, you see, when TIP #57
## Using [scan] set r 80 set g 80 set b 80 scan $rgb #%2x%2x%2x r g b set resultRgb [list $r $g $b] ## Using [regexp] regexp {$#(..)?(..)?(..)?^} $rgb r g b if {! [llength $r]} {set r 80} if {! [llength $g]} {set g 80} if {! [llength $b]} {set b 80} set resultRgb [list $r $g $b]As you can see, the idioms required are different in each case. If, as you're developing code, you start with the scan approach, then decide you need to support something more sophisticated (eg, you want to have decimal, octal, or hex numbers), then you need to remember to change not just the parsing, but the method of assigning defaults as well.This also demonstrates again that providing a default value (eg, {}) when no value is provided really ought to be defined by the application and not the operation. The method of using defaults with scan is more straightforward (and amenable to using lassign or [lscan]) than the method with regexp.The solution that I proposed was to make applying defaults similar to the way that defaults are handled with proc: {var dflt}. In fact, I would go a step farther and suggest that this idiom should be available to all Tcl operations that assign values to variables (including scan and regexp). But, I think that this is unlikely to occur, and is beyond the scope of what I was discussing.The real point of my original posting was to demonstrate the utility, flexibility, power, and readability or using this idiom. I think it's a shame to limit that idiom to proc. The most general application of it is to use it for assignment, which is what I showed.slebetman: I agree with the defaults mechanism. Especially since we're so used to using it in proc. I wish we have it in all commands that assigns values to multiple variables:
lassign $foo {a 0} {b {}} {c none} scan $rgb #%2x%2x%2x {r 80} {g 80} {b 80} regexp {$#(..)?(..)?(..)?^} $rgb {r 80} {g 80} {b 80} foreach {x {y 0} {z 100}} $argv {..}I think such commands should check if the variable it is assigning to is a pair of words of which the second word is the default value. Assigning an empty string have always seemed to me too much like the hackish NULL value trick in C (the number of times I had to restructure apps because the customer insisted that zero is a valid value and should not signify undefined...).The only downside I can think of is that this breaks apps with spaces in variable names. But then again, most of us are used to not writing spaces in variable names and we are used to this syntax in proc.BTW, I also think lassign is a bad name for this operation. It makes much more sense if we instead use the name lassign to mean assign "things" to a list which fits your syntax proposal. My personal preference is still unlist (when we finally get 8.5 I'll be doing an interp alias {} unlist {} lassign). lscan doesn't sound right to me but lsplit sounds just right for splitting a list into individual variables.DKF: The name comes from TclX. Choosing a different name or argument syntax to that very well known piece of code is not worth it; just gratuitous incompatability.
fredderic: I really don't see what all the fuss is about. lassign is fine just the way it is, lset is already taken, anything-scan sounds like it does a heck of a lot more than just assigning words to variables, and the concept of proc-like default values just makes me shudder... Even in the definition of a proc! ;)Something I would like to see, is an lrassign that does the left-to-right thing, and maybe some variant or option to lassign that takes a second list of default values:
lassign-with-defs defaultsList valuesList ?variable ...?where the defaults list would be empty-string-extended to the number of variables given (any extra defaults would simply be ignored), the values list wouldn't (any extra values would be returned as per usual), so you'd end up with:
lassign-with-defs {1 2 3} {a {}} w x y zbeing the equivalent of:
set w a ;# from values list set x {} ;# also from values list set y 3 ;# 3rd default value carried through set z {} ;# empty-string expanded defaults # with both arguments consumed, and empty string is returnedThe old filling-with-empty-strings lassign behaviour would thus be achieved by simply giving it an empty default values list, and the whole thing would be absolutely fabulous. ;)Of course, the catch is that if you simply take away the filling-with-empty-strings behaviour from lassign, then the defaults capability is created by simply doing two lassigns. A little wasteful, perhaps (possibly problematic if variable write traces are involved), but still better than most of the alternatives. (Perhaps a third argument to lrepeat would fulfill the empty-string-filling requirement by accepting an initial list, and repeatedly appending the specified item until the list contains at least count words? I can imagine several occasions where that could be handy.)
Ed Hume: I think the syntax of lassign is not as useful as having the value list and the variable name list being of similar structure:
vset {value1 value2 value3 ...} {name1 name2 name3 ...}I have provided an lset command since Tcl 7.6 in my toolset which was renamed to vset with Tcl 8.4. Having both the names and values as vectors allows you to easily pass both to other procedures without resorting to a variable number of arguments. It is a common idiom to assign each row of table data to a list of column names and work with it:
foreach row $rows { vset $row $cols # now each column name is defined with the data of the table row # ... }A second significant advantage of this syntax is that the structure of the names and the values are not limited to vectors. The vset command is actually a simplified case of the rset command which does a recursive set of nested data structures:
rset {1 2 3} {a b c} # $a is 1, $b is 2, ... rset {{1.1 1.2 1.3} 2 {3.1 3.2}} {{a b c} d {e f}} # $a is 1.1, $b is 1.2, $f is 3.2,....The syntax of vset and rset lend themselves to providing an optional third argument to provide default values in the case where empty values are not desired. So this is a cleaner implementation of frederic's lassign-with-defaults - the defaults values can have the usual empty string default.Now that Tcl has the expansion operator, the difference between lassign and vset is not as important as it was, but I do think vset is a lot more powerful.DKF: Ultimately, we went for the option that we did because that was what TclX used. However, a side-benefit is that it also makes compiling the command to bytecode much easier than it would have been with vset. (Command compilers are rather tricky to write when they need to parse apart arguments.)
Script Implementation editBoth the built-in lassign and the TclX lassign are faster than the scripted implementations presented below.KPV: For those who want to use [lassign] before Tcl 8.5, and without getting TclX, here's a tcl-only version of lassign:
if {[namespace which lassign] eq {}} { proc lassign {values args} { set vlen [llength $values] set alen [llength $args] # Make lists equal length for {set i $vlen} {$i < $alen} {incr i} { lappend values {} } uplevel 1 [list foreach $args $values break] return [lrange $values $alen end] } }jcw: Couldn't resist rewriting in a style I prefer. Chaq'un son gout - a matter of taste - of course:
if {[info procs lassign] eq {}} { proc lassign {values args} { while {[llength $values] < [llength $args]} { lappend values {} } uplevel 1 [list foreach $args $values break] lrange $values [llength $args] end } }KPV: But from an efficiency point of view, you're calling llength way too many times--every iteration through the while loop does two unnecessary calls. How about this version -- your style, but more efficient:
if {[namespace which lassign] eq {}} { proc lassign {values args} { set alen [llength $args] set vlen [llength $values] while {[incr vlen] <= $alen} { lappend values {} } uplevel 1 [list foreach $args $values break] lrange $values $alen end } }jcw interjects: Keith... are you sure llength is slower? (be sure to test inside a proc body)kpv continues: It must be my assembler/C background but I see those function calls, especially the one returning a constant value and wince. But you're correct, calling llength is no slower than accessing a variable. It guess the byte compiler is optimizing out the actual call.DKF: llength is indeed bytecoded.sbron: I see no reason to massage the values list at all. foreach will do exactly the same thing even if the values list is shorter than the args list. I.e. this should be all that's needed:
if {[namespace which lassign] eq {}} { proc lassign {values args} { uplevel 1 [list foreach $args $values break] lrange $values [llength $args] end } }RS: Yup - that's the minimality I like :^)This version does not work as described in the documentation for lassign for this case:
% lassign [list] a b c % set a can't read "a": no such variablesbron: You are right, I just noticed that myself too. Improved version:
if {[namespace which lassign] eq {}} { proc lassign {values args} { uplevel 1 [list foreach $args [linsert $values end {}] break] lrange $values [llength $args] end } }AMG: I prefer to use catch to check for a command's existence. Not only will catch check if the command exists, but it can also check if it supports an ensemble subcommand, an option, or some syntax. Plus it works with interp alias, a clear advantage over info commands.
if {[catch {lassign {}}]} { proc lassign {list args} { uplevel 1 [list foreach $args [concat $list {{}}] break] lrange $list [llength $args] end } }
Open Questions editJMN: tclX doesn't seem to use a separate namespace for its commands so if we do a 'package require tclX' in Tcl 8.5+, which version of a command such as lassign will end up being used?
% lassign wrong # args: should be "lassign list ?varName ...?" % package require Tclx 8.4 % lassign wrong # args: lassign list varname ?varname..?It would seem that Tcl's lassign is replaced with TclX's.AM Most definitely, Tcl is very liberal in that respect. You can replace any procedure or compiled command by your own version. That is one reason you should use namespaces. But I think the origin of TclX predates namespaces. | http://wiki.tcl.tk/1530 | CC-MAIN-2017-22 | refinedweb | 4,128 | 65.76 |
Angular 2 [hidden] is a special case binding to hidden property.
It is closest cousin of ng-show and ng-hide.
It is more powerful to bind any property of elements. Both the ng-show and ng-hide are used to manage the visibility of elements using ng-hide css class. It is also set the display property “display:none”.
Stayed Informed - Angular 2 @Inputs
All the above features are supported in Angular 2 but added some extra feature like animations etc.
Syntax:-
<div [hidden]="!active"> Hello, this is active area! </div>
Note: - Don't use hidden attribute with Angular 2 to show/hide elements.
Question: - Don't use hidden attribute with Angular 2. Here is why?
The hidden attribute is used to hide elements. Browsers are not supposed to display elements that have the hidden attribute specified. Browsers attach "display: none" styles to elements with hidden attribute.
Example,
import { Component } from 'angular2/core'; @Component({ selector: 'demo', templateUrl: 'app/component.html' }) export class MainComponent { Ishide: true; }
<div [hidden]="Ishide"> Hey, I’m using hidden attribute. </div>
Works great but some time its override hidden attribute with some css and that time behave wrong!..
For example,
Be sure to don't have a display css rule on your <p> tags who override hidden behaviour like i.e.
p { display: inline-block !important; }
The above hidden html attributes acts like display: none;
Stayed Informed - Angular 4 vs. Angular 2
I hope you are enjoying with this post! Please share with you friends. Thank you so much!
You Might Also Like | http://www.code-sample.com/2016/04/angular-2-hidden-property.html | CC-MAIN-2017-39 | refinedweb | 258 | 61.33 |
I am trying to write a check to determine whether a number is pentagonal or not. The pentagonal numbers are numbers generated by the formula:
Pn=n(3n−1)/2
1, 5, 12, 22, 35, 51, 70, 92, 117, 145, ...
from math import sqrt
def is_pent(n):
ans = any((x*((3*x)-1))/2 == n for x in range(int(sqrt(n))))
return ans
According to Wikipedia, to test whether a positive integer
x is a pentagonal number you can check that
((sqrt(24*x) + 1) + 1)//6 is a natural number. Something like this should work for integers that aren't very big:
from math import sqrt def is_pentagonal(n): k = (sqrt(24*n+1)+1)/6 return k.is_integer() | https://codedump.io/share/LiPUSxmuaq/1/python---is-pentagonal-number-check | CC-MAIN-2016-50 | refinedweb | 121 | 55.27 |
Learn how to get started with image processing on your Raspberry Pi 3!
This guide will get you all set up for Python-based image processing on your Raspberry Pi 3! You can also use this guide with other hardware if you apply some slight tweaks (e.g. pick another architecture when downloading software). You should also be familiar with basic usage of your system's terminal. Let's get started!
sudo apt-get update
sudo apt-get dist-upgrade
sudo raspi-update
Pylon contains all the software we need for interacting with Basler cameras. Builds are provided for multiple platforms.
INSTALLfile. Do not attempt to run the pylon viewer as it is not bundled with ARM releases
Samples/Grabdirectory and execute
make, then run
./Grab, you should see some text scrolling with information about pictures being grabbed
In this step we'll set up Python 3 and the OpenCV image processing library. Just follow the instructions over here.
The only missing part is connecting Python to your camera now. PyPylon takes care of this task.
cvvirtualenv you created while installing OpenCV
python --versionfrom within your virtualenv
..cp34-cp34m-linux_armv7l.whl)
whlfile with pip via
pip3 install *path-to-whl*
pythonand check that running
import pypylon.pylondoes not yield any errors
Done! You can either try out our example projects now or create some cool stuff of your own. Have fun!
Report comment | https://imaginghub.com/projects/100-from-zero-to-image | CC-MAIN-2020-05 | refinedweb | 233 | 66.94 |
explain_lchown_or_die - change ownership of a file and report errors
#include <libexplain/lchown.h> void explain_lchown_or_die(const char *pathname, int owner, int group);
The explain_lchown_or_die function is used to call the lchown(2) system call. On failure an explanation will be printed to stderr, obtained from explain_lchown(3), and then the process terminates by calling exit(EXIT_FAILURE). This function is intended to be used in a fashion similar to the following example: explain_lchown_or_die(pathname, owner, group); pathname The pathname, exactly as to be passed to the lchown(2) system call. owner The owner, exactly as to be passed to the lchown(2) system call. group The group, exactly as to be passed to the lchown(2) system call. Returns: This function only returns on success. On failure, prints an explanation and exits.
lchown(2) change ownership of a file explain_lchown(3) explain lchown(2) errors exit(2) terminate the calling process
libexplain version 0.19 Copyright (C) 2008 Peter Miller explain_lchown_or_die(3) | http://huge-man-linux.net/man3/explain_lchown_or_die.html | CC-MAIN-2017-13 | refinedweb | 161 | 56.76 |
23 May 2012 04:16 [Source: ICIS news]
SINGAPORE (ICIS)--Global road vehicle tyre production will hit about 2bn in 2020, up from around 1.2bn/year currently, spurred by strong demand growth from emerging markets, a senior executive at French tyre maker Michelin said on Tuesday.
The number of tyres produced for trucks and buses will grow to 200m in 2020, up from about 120m currently, said Michel Rollier, managing chairman and managing general partner at the Michelin group.
This increase in tyre production will need to develop in tandem with concerns over environmental protection and sustainability, Rollier said, speaking at the World Rubber Summit 2012 which runs from 22-24 May in ?xml:namespace>
"Changing customer values are making them more aware of their actions and its impact on the environment... this makes them more demanding," he said.
"Sustainability and preservation is not a topic to discuss about, it’s a necessity," Rollier said.
"We have to also keep in mind that 18m tonnes of tires are being discarded each year... we have to believe in recycling... as these can be collected as raw materials," Roll | http://www.icis.com/Articles/2012/05/23/9562520/global-road-vehicle-tyre-output-to-hit-2bn-in-2020.html | CC-MAIN-2014-52 | refinedweb | 188 | 61.56 |
-
=> in def vs. val
hi,
am i the only one who finds it hard to learn and grok the ways to
specify functions (val) with or without types vs. methods (def)? i'm
totally just cargo-cult pattern-match programming when it comes to
this. any pointers to cheat-sheets / exhaustive yet clear and concise
summaries would be appreciated. (yes, i've been trying to grok the
language spec about this as well as googling around.)
thanks.
Re: => in def vs. val
In general, in Java you write
ReturnType myMethodName(ParameterType1 p1, ParameterType2 p2, ...)
whereas in Scala you write
def myMethodName(p1: ParameterType1, p2: ParameterType2, ...): ReturnType
So far so good. The type of the *equivalent function* is
(ParameterType1, ParameterType2, ...) => ReturnType
and if you want to specify a particular function of that type, then you specify the parameter names and give a code block:
(p1: ParameterType1, p2: ParameterType2, ...) => {
/* code */
myReturnValue // Should have type ReturnType
}
Now, you can easily convert a method into a function:
myObject.myMethodName _
So, putting this all together
object UntestedExample {
def isPositive(i: Int): Boolean = (i > 0)
val posTestA: (Int) => Boolean = (i: Int) => { (i > 0) }
val posTestB = (i: Int) => (i>0)
val posTestC = isPositive _
def thresholdFunctionGenerator(i:Int): (Int) => Boolean = {
(j: Int) => (i > j)
}
val posTestD = thresholdFunctionGenerator(0)
val threshFuncFuncA = (i:Int) => { (j: Int) => (j>i) }
val threshFuncFuncB = thresholdFunctionGenerator _
val posTestE = threshFuncFuncA(0)
val posTestF = threshFuncFuncB(0)
}
Here I've defined a method isPositive and six equivalent functions (stored in vals). Hopefully you can follow how each one is being created. (Note that I haven't always used either the most pedantically long or the most concise version possible.)
Now, in addition to the story I've already told, there are three wrinkles.
(1) Methods with multiple parameter lists.
You can write these in Scala:
def multiMethodThreshold(i: Int)(j: Int): Boolean = (j>i)
When you call them, you call them with multiple parameters. However, you can break in at any point and convert the remainder to a function:
val posTestG = multiMethodThreshold(0) _
Or, you can convert the whole thing to a function that returns a function that returns the result:
val threshFuncFuncC: (Int) => ( (Int) => Boolean) = multiMethodThreshold _
(2) Methods and functions with no parameters vs. by-name parameters.
You can write a method that takes a function without any parameters at all:
def check(value: => Int) = (value > 0)
This is called a "by-name parameter", and basically says that instead of passing the actual value, if you pass a variable, it will be passed by reference and looked at every time you call the method (functions of the type () => stuff work too, typically). This construct can be used in very many places, but not everywhere.
Or, you can write a method that takes a function with an empty parameter list:
def check(value: () => Int) = (value() > 0)
Here, you *must* pass a function, and you call it by using (). But otherwise it works a lot like the by-name parameter version, and it can be used everywhere.
(3) Partial functions.
These are most often used where you need a case statement. This is a little beyond the scope of what I want to get into right now, but for now just keep in mind that you can write your very own methods that you use like so:
myObject.myMethod {
case this:ThatType => stuff
case _ => differentStuff
}
by using a PartialFunction[TypeToPassIntoCase,TypeReturnedFromCase]. (The code block above is actually the definition of such a partial function.)
I hope this helps somewhat.
--Rex
On Tue, Jan 26, 2010 at 9:03 PM, Raoul Duke <raould [at] gmail [dot] com> wrote:
Re: => in def vs. val
cf. Rex's long answer, some of which works for me, and malheursement
some of which didn't, at least as far as i understood it, but it all
helped me start to get more of a clue. here's another way i'd answer
what i was wondering about. pretty please offer corrections to my
cluelessness here below.
A. Things To Be Aware Of wrt Your Previous Mental Models.
0) Methods and functions are similar, but of course not exactly the
same beasts in Scala (or, well, most places). You can make one "point
to" an implementation of the other, so you can sorta convert them back
and forth if need be, tho I'm not sure how scopes might come in to
play.
1) Scala's type inference only works for the return type at best. You
must specify argument types.
a) So the type spec might appear to oddly be only "half" vs. if you
are thinking Java or Haskell (since the former has no type inference,
and the latter since it lacks OO has "complete" type inference).
b) One way of specifying functions requires that you also specify the
return type.
2) When defining a method vs. a function, the delimiters used to
separate Name, Arguments, Result, and Body are kinda different. This
might have something to do with needing the syntax to support
anonymous functions (lambdas)? In particular, be aware of how ":"
changes the syntax for a function.
B. Examples, Trying To Work Our Way Up From Simple To More Complex.
def F() = 4
val F = () => 4
val F:()=>Int = () => 4
def F(x:Int) = x
val F = (x:Int) => x
val F:(Int)=>Int = (x) => x
def F(x:Int, y:Int) = x+y
val F = (x:Int, y:Int) => x+y
val F:(Int,Int)=>Int = (x,y) => x+y
val Ff = (x:Int) => x
def Df(x:Int) = Ff(x)
def Df(x:Int) = x
val Ff = (x:Int) => Df(x)
sincerely.
Re: => in def vs. val
On Tue, Jan 26, 2010 at 7:59 PM, Rex Kerr wrote:
> I hope this helps somewhat.
i bet it will be, thank you for the detailed response! i am parsing! | http://www.scala-lang.org/node/5061 | CC-MAIN-2013-20 | refinedweb | 975 | 61.87 |
Class that implements helper functions for the pure virtual PHX::Evaluator class. More...
#include <Phalanx_Evaluator_WithBaseImpl.hpp>
Class that implements helper functions for the pure virtual PHX::Evaluator class.
This class implements code that would essentially be repeated in each Evaluator class, making it quicker for developers to add new evaluators. All field evaluators should inherit from this class if possible instead of the base class so they don't have to code the same boilerplate in all evaluators, but this is not mandatory.
Evaluate all fields that the provider supplies.
Input:
Implements PHX::Evaluator< Traits >.
This routine is called after each residual/Jacobian fill.
This routine is called ONCE on the provider after the fill loop over cells is completed. This allows us to evaluate any post fill data. An example is to print out some statistics such as the maximum grid peclet number in a cell.
Implements PHX::Evaluator< Traits >.
Allows providers to grab pointers to data arrays.
Called once all providers are registered with the manager.
Once the field manager has allocated all data arrays, this method passes the field manager to the providers to allow each provider to grab and store pointers to the field data arrays. Grabbing the data arrays from the varible manager during an actual call to evaluateFields call is too slow due to the map lookup and FieldTag comparison (which uses a string compare). So lookups on field data are only allowed during this setup phase.
Implements PHX::Evaluator< Traits >.
This routine is called before each residual/Jacobian fill.
This routine is called ONCE on the provider before the fill loop over cells is started. This allows us to reset global objects between each fill. An example is to reset a provider that monitors the maximum grid peclet number in a cell. This call would zero out the maximum for a new fill.
Implements PHX::Evaluator< Traits >. | http://trilinos.sandia.gov/packages/docs/r10.4/packages/phalanx/doc/html/classPHX_1_1EvaluatorWithBaseImpl.html | CC-MAIN-2014-35 | refinedweb | 314 | 58.08 |
How Scroll Views Work
Scroll views act as the central coordinator for the Application Kit’s scrolling machinery, managing instances of scrollers, rulers, and clipping views. A scroll view changes the visible portion of the displayed document in response to user-initiated actions or to programmatic requests by the application. This article describes the various components of a scroll view and how the scrolling mechanism works.
Components of a Scroll View
Cocoa provides a suite of classes that allow applications to scroll the contents of a view. Instances of the
NSScrollView class act as the container for the views that work together to provide the scrolling mechanism. Figure 1 shows the possible components of a scroll view.
The Document View
NSScrollView instances provide scrolling services to its document view. This is the only view that an application must provide a scroll view. The document view is responsible for creating and managing the content scrolled by a scroll view.
The Content View
NSScrollView objects enclose the document view within an instance of
NSClipView that is referred to as the content view. The content view is responsible for managing the position of the document view, clipping the document view to the content view's frame, and handling the details of scrolling in an efficient manner. The content view scrolls the document view by altering its bounds rectangle, which determines where the document view’s frame lies. You don't normally interact with the
NSClipView class directly; it is provided primarily as the scrolling machinery for the
NSScrollView class.
Scroll Bars
The
NSScroller class provides controls that allow the user to scroll the contents of the document view. Scroll views can have a horizontal scroller, a vertical scroller, both, or none. If the scroll view is configured with scrollers, the
NSScrollView class automatically creates and manages the appropriate control objects. An application can customize these controls as required. See “How Scrollers Interact with Scroll Views” for more information.
Rulers
Scroll views also support optional horizontal and vertical rulers, instances of the
NSRulerView class or a custom subclass. To allow customization, rulers support accessory views provided by the application. A scroll view's rulers don’t automatically establish a relationship with the document view; it is the responsibility of the application to set the document view as the ruler's client view and to reflect cursor position and other status updates. See Ruler and Paragraph Style Programming Topics for more information.
How Scrolling Works
A scroll view's document view is positioned by the content view, which sets its bounds rectangle in such a way that the document view’s frame moves relative to it. The action sequence between the scrollers and the corresponding scroll view, and the manner in which scrolling is performed, involve a bit more detail than this.
Scrolling typically occurs in response to a user clicking a scroller or dragging the scroll knob, which sends the
NSScrollView instance a private action message telling it to scroll based on the scroller's state. This process is described in “How Scrollers Interact with Scroll Views.” If you plan to implement your own kind of scroller object, you should read that section.
The
NSClipView class provides low-level scrolling support through the
scrollToPoint: method. This method translates the origin of the content view’s bounds rectangle and optimizes redisplay by copying as much of the rendered document view as remains visible, only asking the document view to draw newly exposed regions. This usually improves scrolling performance but may not always be appropriate. You can turn this behavior off using the
NSClipView method
setCopiesOnScroll: passing
NO as the parameter. If you do leave copy-on-scroll active, be sure to scroll the document view programmatically using the
NSView method
scrollPoint: method rather than
translateOriginToPoint:.
Whether the document view scrolls explicitly in response to a user action or an
NSClipView message, or implicitly through a
setFrame: or other such message, the content view monitors it closely. Whenever the document view’s frame or bounds rectangle changes, it informs the enclosing scroll view of the change with a
reflectScrolledClipView: message. This method updates the
NSScroller objects to reflect the position and size of the visible portion of the document view.
How Scrollers Interact with Scroll Views
NSScroller is a public class primarily for developers who decide not to use an instance of
NSScrollView but want to present a consistent user interface. Its use outside of interaction with scroll views is discouraged, except in cases where the porting of an existing application is more straightforward.
Configuring an
NSScroller instance for use with a custom container view class (or a completely different kind of target) involves establishing a target-action relationship as defined by
NSControl. In the case of the scroll view, the target object is the content view. The target object is responsible for implementing the action method to respond to the scroller, and also for updating the scrollers in response to changes in target.
As the scroller tracks the mouse, it sends an action message to its target object, passing itself as the parameter. The target object then determines the direction and scale of the appropriate scrolling action. It does this by sending the scroller a
hitPart message. The
hitPart method returns a part code that indicates where the user clicked in the scroller. Table 1 shows the possible codes returned by the
hitPart method.
The target object tracks the size and position of its document view and updates the scroller to indicate the current position and visible proportion of the document view by sending the appropriate scrollers a
setFloatValue:knobProportion: message, passing the current scroll location. The knob proportion parameter is a floating-point value between 0 and 1 that specifies how large the knob in the scroller should appear.
NSClipView overrides most of the
NSView
setBounds... and
setFrame... methods to perform this updating. | http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/NSScrollViewGuide/Articles/Basics.html | CC-MAIN-2013-20 | refinedweb | 979 | 51.58 |
Middlewares with react context and hooks
Vanderlei Alves da Silva
・4 min read
Continuing the idea explored in the previous article of having a global state management using pure react (with react context and hooks), we’re going to explore now how to take advantage of the middlewares concept, implementing for that a loger and localStorage middleware to our todo app, check here the live demo and here the source code
About middlewares
The term can slightly differ from each depending on the middleware type (Database Middleware, Integration Middleware, Application Middleware, Object Middleware, Remote Procedure Call (RPC) Middleware, Message Oriented Middleware ...) but essentially they have the idea of a composable peace of code running in the middle of to distincts processes improving their communication, and by process we could use more specific terms according to the scenario that we are talking about.
In the web development niche this term is spreedly used in server side technologies such as Laravel, ExpressJS, nestJS among others as:
A composition of functions that are called, in a given order, before the route handler and that usually each one of them have access to the request information and a reference to the next function in this chain
This idea was taken by the front-end fellows, mainly applied by the state management libraries: redux, mobx, vuex (the last even though with a different nomenclature “plugin” the idea is the same), and what they all do is to provide a way of running some code between dispatching an action, and the moment it changes the application state.
Of course this concept can be used in other scenarios, this article explores its usage attached to the router change in angular, getting closer to the above mentioned server-side ones. But for now we’re going to explore the first one.
Show me the code
import { initial, final } from './log'; import localStorage from './localStorage'; export default ({ state, action, handler }) => { const chain = [initial, handler, localStorage, final]; return chain.reduce((st, fn) => fn(st, action), state); };
That’s all what matters, we need a function to create a middleware chain and execute all them in a given order and of course call our handler (the reducer function called by a given action in our application).
const chain = [initial, handler, localStorage, final]
Here we define the middlewares that will be called and in which order they will, the ones that comes before handler are the pre-middlewares (you put here all middlewares that you want to run something before the state have changed) and the others the post-middlewares (the ones that execute something with the new state).
The middleware function signature follows the same pattern of the reducers:
(state, action) => newState
As an example here is the initial log middlewares:
const initial = (state, action) => { console.log(action); console.log(state); return state; };
The middleware just log the initial state (before the state have being change by the reducer) on the console.
Here we have a more interesting one:
import store from 'store2'; export default state => { store.set(state.storeId, state); return state; };
This middleware saves the current state on the local storage, I’m using for this a small library store2 just to make sure retrocompatibility with old browsers and also avoiding working with try catch statements.
I have on the app state an storeId property with the name of the key that will be saved on the local storage, so basically in this call:
store.set(state.storeId, state);
I store in the given key the given state. If you check the app again, play around and refresh the page, the information will still be there.
And lastly we have:
return chain.reduce((st, fn) => fn(st, action), state);
We use the reduce array method to iterate over each item of the chain getting the result of the previous and passing to the next item.
There it is
We have now returned to the basics and explored how the main state management libraries conceptually work with middlewares, giving us the same results with less dependencies and less complexity. We now understand what happens, instead of just blindly using them.
What do we got from it!? A better reasoning of when to use these state libraries.
What do we go from now!? More hooks on the way, check here the new custom hooks from react-router-v5.1 and see you soon. ;)
References
Advice for Developers in the Early Stage of their Career
I have been asked this a couple of times and will love to hear from others too....
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/vanderleisilva/middlewares-with-react-context-and-hooks-2gm1 | CC-MAIN-2019-47 | refinedweb | 760 | 52.43 |
Important: Please read the Qt Code of Conduct -
Clang CodeModel
why clang codemodel still doesn't work in 3.5.x QtC builds?
is it ok because of experimental build or something wrong in a settings?
I mean , unlike of builtin model, clang model doesn't allow to expand available members/fields/namespaces/etc by dot/arrow (ctrl+space doesn't work for this model as well)?
- SGaist Lifetime Qt Champion last edited by
Hi,
If you are having trouble with the clang-code-model, you should check with the Qt Creator folks on the Qt Creator mailing list You'll find there Qt Creator's developers/maintainers (this forum is more user oriented) | https://forum.qt.io/topic/62206/clang-codemodel | CC-MAIN-2021-43 | refinedweb | 114 | 59.84 |
Vahid Mirjalili, Data Mining Researcher
2. Methodology Description
Clustering algorithms has a wide range of applications. Several methods are proposed so far, and each method has drawbacks, for example in k-means clustering the knowledge of number of clusters is crucial, yet it doesn't provide a robust solution to datasets with clusters that have different sizes or densities.
In this article, an implementation of a novel clustering algorithm is provided, and applied to a synthetic dataset that contains multiple topologies. See the reference publication by Alex Rodriguez and Alessandro Laio.
The most important feature of this algorithm is that it can automatically find the number of clusters.
In the following sections, first we provide a clustsred dataset, then illustrate how the algorithm works. At the end, we conclude with comparison of this algorithm with other clustering algorithm such as k-means and DBScan.
A syntethic dataset is provided that can be downloded here. This dataset contains 2000 points of 2 features and a ground truth cluster. The dataset is generated by random normal and uniform numbers. Several noise points were also added. Total if 6 clusters exist.
%load_ext watermark %watermark -a 'Vahid Mirjalili' -d -p scikit-learn,numpy,matplotlib,pyprind,prettytable -v
Vahid Mirjalili 18/10/2014 CPython 2.7.3 IPython 2.0.0 scikit-learn 0.15.2 numpy 1.9.0 matplotlib 1.4.0 pyprind 2.6.1 prettytable 0.7.2
## Implementation of Density Peak Clustering ### Vahid Mirjalili import numpy as np import urllib import csv # download the data #url = '' #content = urllib.urlopen(url) #content = content.read() # save the datafile locally #with open('data/synthetic_data.csv', 'wb') as out: # out.write(content) # read data into numpy array d = np.loadtxt("../data/synthetic_data.csv", delimiter=',') y = d[:,2] print(np.unique(y))
[ 0. 1. 2. 3. 4. 5. 6.]
In this dataset, 0 represents noise points, and others are considered as ground-truth cluster IDs. Next, we visulaize the dataset as below:
from matplotlib import pyplot as plt %matplotlib inline plt.figure(figsize=(10,8)) for clustId,color in zip(range(0,7),('grey', 'blue', 'red', 'green', 'purple', 'orange', 'brown')): plt.scatter(x=d[:,0][d[:,2] == clustId], y=d[:,1][d[:,2] == clustId], color=color, marker='o', alpha=0.6 ) plt.title('Clustered Dataset',()
import pyprind def euclidean_dist(p, q): return (np.sqrt((p[0]-q[0])**2 + (p[1]-q[1])**2)) def cal_density(d, dc=0.1): n = d.shape[0] den_arr = np.zeros(n, dtype=np.int) for i in range(n): for j in range(i+1, n): if euclidean_dist(d[i,:], d[j,:]) < dc: den_arr[i] += 1 den_arr[j] += 1 return (den_arr) den_arr = cal_density(d[:,0:2], 0.5) print(max(den_arr))
579
def cal_minDist2Peaks(d, den): n = d.shape[0] mdist2peaks = np.repeat(999, n) max_pdist = 0 # to store the maximum pairwise distance for i in range(n): mdist_i = mdist2peaks[i] for j in range(i+1, n): dist_ij = euclidean_dist(d[i,0:2], d[j,0:2]) max_pdist = max(max_pdist, dist_ij) if den_arr[i] < den_arr[j]: mdist_i = min(mdist_i, dist_ij) elif den_arr[j] <= den_arr[i]: mdist2peaks[j] = min(mdist2peaks[j], dist_ij) mdist2peaks[i] = mdist_i # Update the value for the point with highest density max_den_points = np.argwhere(mdist2peaks == 999) print(max_den_points) mdist2peaks[max_den_points] = max_pdist return (mdist2peaks)
mdist2peaks = cal_minDist2Peaks(d, den_arr) def plot_decisionGraph(den_arr, mdist2peaks, thresh=None): plt.figure(figsize=(10,8)) if thresh is not None: centroids = np.argwhere((mdist2peaks > thresh) & (den_arr>1)).flatten() noncenter_points = np.argwhere((mdist2peaks < thresh) ) else: centroids = None noncenter_points = np.arange(den_arr.shape[0]) plt.scatter(x=den_arr[noncenter_points], y=mdist2peaks[noncenter_points], color='red', marker='o', alpha=0.5, s=50 ) if thresh is not None: plt.scatter(x=den_arr[centroids], y=mdist2peaks[centroids], color='blue', marker='o', alpha=0.6, s=140 ) plt.title('Decision Graph', size=20) plt.xlabel(r'$\rho$', size=25) plt.ylabel(r'$\delta$', size=25) plt.ylim(ymin=min(mdist2peaks-0.5), ymax=max(mdist2peaks+0.5)) plt.tick_params(axis='both', which='major', labelsize=18) plt.show() plot_decisionGraph(den_arr, mdist2peaks, 1.7)
[[954]]
As shown in the previous plot, cluster centroids stand out since they have larger distances to density peaks. These points are shown in blue, and can be picked easily:
## points with highest density and distance to other high-density points thresh = 1.0 centroids = np.argwhere((mdist2peaks > thresh) & (den_arr>1)).flatten() print(centroids.reshape(1, centroids.shape[0]))
[[ 266 336 482 734 954 1785]]
Further, we show the cluster centroids and non-centroid points together in figure below. The centroids are shown in purple stars. Surprisingly, the algorithm picks one point as centroid from each high-density region. The centroids have large density and large distance to other density-peaks.
%matplotlib inline plt.figure(figsize=(10,8)) d_centers = d[centroids,:] for clustId,color in zip(range(0,7),('grey', 'blue', 'red', 'green', 'purple', 'orange', 'brown')): plt.scatter(x=d[:,0][d[:,2] == clustId], y=d[:,1][d[:,2] == clustId], color='grey', marker='o', alpha=0.4 ) # plot the cluster centroids plt.scatter(x=d_centers[:,0][d_centers[:,2] == clustId], y=d_centers[:,1][d_centers[:,2] == clustId], color='purple', marker='*', s=400, alpha=1.0 ) plt.title('Cluster Centroids', size=20) plt.xlabel('$x_1$', size=25) plt.ylabel('$x_2$', size=25) plt.tick_params(axis='both', which='major', labelsize=18) plt.xlim(xmin=0, xmax=10) plt.ylim(ymin=0, ymax=10) plt.show()
In this figure, cluster centers are shown in dark triangle, and the remaining points are shown in grey, since we don't know their cluster ID yet.
After the cluster centroids are found, the remaining points should be assigned to their corresponding centroid. A naive way is to assign each point simply to the nearest centroid. However, this method is prone to large errors if data has some weird underlying topology.
As a result, the least error prone method is to assign points to the cluster of their nearest neigbour, as given in algorithm below:
Algorithm:
1. Assign centroids to a unique cluster, and remove them from the set
Repeat until all the points are assigned to a cluster:
2. Find the maximum density in the remaining set of points, and remove it from the set
3. Assign it to the same cluster as its nearest neighbor in the set of already assigned points
def assign_cluster(df, den_arr, centroids): """ Assign points to clusters """ nsize = den_arr.shape[0] #print (nsize) cmemb = np.ndarray(shape=(nsize,2), dtype='int') cmemb[:,:] = -1 ncm = 0 for i,cix in enumerate(centroids): cmemb[i,0] = cix # centroid index cmemb[i,1] = i # cluster index ncm += 1 da = np.delete(den_arr, centroids) inxsort = np.argsort(da) for i in range(da.shape[0]-1, -1, -1): ix = inxsort[i] dist = np.repeat(999.9, ncm) for j in range(ncm): dist[j] = euclidean_dist(df[ix], df[cmemb[j,0]]) #print(j, ix, cmemb[j,0], dist[j]) nearest_nieghb = np.argmin(dist) cmemb[ncm,0] = ix cmemb[ncm,1] = cmemb[nearest_nieghb, 1] ncm += 1 return(cmemb)
clust_membership = assign_cluster(d, den_arr, centroids)
plt.figure(figsize=(10,8)) for clustId,color in zip(range(0,6),('blue', 'red', 'green', 'purple', 'orange', 'brown')): cset = clust_membership[clust_membership[:,1] == clustId,0] plt.scatter(x=d[cset,0], y=d[cset,1], color=color, marker='o', alpha=0.6 ) plt.title('Clustered Dataset by Density-Peak Algorithm',()
The new clustering algorithm presented here, is capable of clustering dataset that contain
It is important to note that this method needs minimum number of parameters, which is eps for finding the density of each point. The number of clusters is found by analyzing the density peaks and minimum distances to density peaks. The application of this clustering algorithm is not limited to numeric data types, as it could also be used for text/string data types provided that a proper distance function exist. | http://vahidmirjalili.com/static/articles/densityPeak_clustering.html | CC-MAIN-2017-30 | refinedweb | 1,303 | 52.36 |
Created on 2014-01-14 00:43 by rmsr, last changed 2014-03-01 07:16 by koobs. This issue is now closed.
recvfrom_into fails to check that the supplied buffer object is big enough for the requested read and so will happily write off the end.
I will attach patches for 3.4 and 2.7, I'm not familiar with the backporting procedure to go further but all versions since 2.5 have this bug and while very highly unlikely it's technically remotely exploitable.
Quickie trigger script, crash on interpreter exit:
--------- BEGIN SEGFAULT ---------
import socket
r, w = socket.socketpair()
w.send(b'X' * 1024)
r.recvfrom_into(bytearray(), 1024)
Everything before 2.7 is already out of even security maintenance, so you've already checked off everything it will get fixed in.
New changeset 87673659d8f7 by Benjamin Peterson in branch '2.7':
complain when nbytes > buflen to fix possible buffer overflow (closes #20246)
New changeset 715fd3d8ac93 by Benjamin Peterson in branch '3.1':
complain when nbytes > buflen to fix possible buffer overflow (closes #20246)
New changeset 9c56217e5c79 by Benjamin Peterson in branch '3.2':
complain when nbytes > buflen to fix possible buffer overflow (closes #20246)
New changeset 7f176a45211f by Benjamin Peterson in branch '3.3':
merge 3.2 (#20246)
New changeset ead74e54d68f by Benjamin Peterson in branch 'default':
merge 3.3 (#20246)
New changeset 37ed85008f51 by Benjamin Peterson in branch 'default':
merge 3.3 (#20246)
One test fails on FreeBSD 9.0 and 6.4:
======================================================================
ERROR: testRecvFromIntoSmallBuffer (test.test_socket.BufferIOTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_socket.py", line 259, in _tearDown
raise exc
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_socket.py", line 271, in clientRun
test_func()
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_socket.py", line 4690, in _testRecvFromIntoSmallBuffer
self.serv_conn.send(MSG*2048)
BrokenPipeError: [Errno 32] Broken pipe
Perhaps the test is sending an infeasibly large message. If you remove the '*2048' does it pass? (I set up a FreeBSD 9.2 amd64 VM but all tests are passing here).
MSG*1024 passes. I did not look at this issue: Would changing the value to 1024
invalidate the test?
The send part of the test doesn't matter, since what's being tested happens before any reads. The MSG multiplier should be removed completely, since none of the other tests do that.
Patch attached.
New changeset 5c4f4db8107c by Stefan Krah in branch '3.3':
Issue #20246: Fix test failures on FreeBSD. Patch by Ryan Smith-Roberts.
New changeset 9bbc3cc8ff4c by Stefan Krah in branch 'default':
Issue #20246: Fix test failures on FreeBSD. Patch by Ryan Smith-Roberts.
New changeset b6c5a37b221f by Stefan Krah in branch '2.7':
Issue #20246: Fix test failures on FreeBSD. Patch by Ryan Smith-Roberts.
Thanks Ryan. As you say, the original segfault is also triggered with the
shortened message.
I just came across . Now I wonder why this bug was neither reported to PSRT nor get a CVE number. It's a buffer overflow...
I'm going to contact MITRE right away.
Branch status:
Vulnerable (last release prior to patch):
2.7.6
3.1.5
3.2.5
Fixed (latest release post patch):
3.3.4+
3.4
So my reading is that 2.7.7 needs to be brought forward, and source only releases of 3.1.6 and 3.2.6 should be published.
It also sounds like there's a missing trigger that automatically notifies PSRT when someone else classifies a bug as a security bug.
Confirming the fix is in the 3.3.4 tag:
And the 3.4rc1 tag:
This issue has already been assigned CVE-2014-1912
Reference:
We don't currently have the capability to set an email trigger when the type is set to security. That should be submitted as a request on the meta tracker. (It will require a new reactor, which is easy, and a tweak to the database schema, which I don't offhand remember how to deploy, but it shouldn't be hard.)
Is there an ETA for a 2.7.7 release with this fix?
I notified [email protected] and waited for the go-ahead (from Guido I think) before opening this bug. If today is the first that the PSRT is hearing about this, then the issue is broader than just the bugtracker.
Yes, your message reached PSRT on Jan 12th.
Sorry, you are right and I was wrong. :(
Your mail *was* delivered to PSRT. But it failed to reach me because I was having issues with my @python.org account. The server-side spam filter is now deactivated and I receive all mails again.
A recently posted proof of concept exploit got a lot of attention:
I suggest some Python core developer should clarify here whether people running some publically available python based web service
(Zope, Plone, Roundup, MoinMoin, or whatever) are vulnerable or not.
recvfrom_into() is hardly ever used, including in the stdlib itself.
People using third-party software should check that the software itself doesn't call this method (chances are it doesn't).
Antoine Pitrou:
> recvfrom_into() is hardly ever used, including in the stdlib itself.
Thank you for the quick clarification.
This will certainly help to calm down nervous people.
Best regards, Peter.
Can somebody backport the fixes for the test breakages to 3.1 and 3.2 please, it seems they were forgotten.
The original CVE fix includes changes to test_socket.py so I cant imagine security-only-fix policy applies.
Thanks!
New changeset c25e1442529f by Stefan Krah in branch '3.1':
Issue #20246: Fix test failures on FreeBSD. Patch by Ryan Smith-Roberts.
New changeset e82dcd700e8c by Stefan Krah in branch '3.2':
Issue #20246: Fix test failures on FreeBSD. Patch by Ryan Smith-Roberts.
Thank you Stefan | http://bugs.python.org/issue20246 | CC-MAIN-2016-22 | refinedweb | 981 | 69.79 |
Pre-lab
- Get the project 2 starter files using
git pull skeleton master
- Watch the lab 5 video.
Introduction
In this lab, you will get started on project 2. Project 2 is a solo project — no partners. Your work must all be your own. It will be long, arduous, and at times frustrating. However, we hope that you will find it a rewarding experience by the time you are done.
More than any other assignment or in the course, it is extremely important that you begin early. A significant delay in starting this project will leave you most likely unable to finish.
The project 2 spec will be released in beta form 2/17 (Wednesday), and should be effectively complete 2/19 (Friday). However, it will be possible to start the project even before its final release.
Project 2
In this project, you will build a text editor from scratch. It will support a variety of non-trivial features (scroll bars, arrow keys, cursor display, word wrap, saving to files), but you will not be building any of those features in lab today.
In this lab, you'll take a simple example
SingleLetterDisplaySimple.java and use it to build a simple barely-usable text editor with the following features:
- The text appears starting at the top left corner.
- The delete key works properly.
- The enter key works properly.
HelloWorlding
In this project, you will work with a massive library with a very complex API called JavaFX. It will be your greatest ally and most hated foe. While capable of saving you tons of work, understanding of its intricacies will only come through substantial amounts of reading and experimentation. You are likely to go down many blind alleys on this project, and that's fine and fully intentional.
When working with such libraries, there is a practice I call "Hello Worlding." Recall that the very first thing we ever did in Java in this course was write a program that says "Hello World".
public class Hello { public static void main(String[] args) { System.out.println("Hello world!"); } }
Here, the goal was to give us a foundation upon which all else could be built. At the time, there were many pieces that we did not fully understand, and we deferred a full understanding until we were ready. Much like learning sailing, martial arts, swimming, mathematics, or Dance Dance Revolution, we start with some basic actions that we understand in isolation from all else. We might learn a basic throw in Judo, not imagining ourselves in a match, but focused entirely on the action of the throw. Only later will we start trying to use a specific throw in a specific circumstance. This is the nature of abstraction.
Programming is no different. When we are learning new things, we will need to keep our learning environment as simple as possible. Just as you probably should not try to learn a new swimming kick in 10 foot waves, you should learn about the features of a new library using the simplest test programs possible.
However, we musn't go too far: one must still be in the water to truly learn to swim. Practicing the arm movements for the butterfly stroke on dry land may lead to little understanding at all. Finding the right balance between a "too-simple" and "too-complex" experimentation environment is an art that we hope you'll learn during this project.
I call this art HelloWorlding. HelloWorlding is the practice of writing the simplest reasonable module that shows that a new action is even possible.
For example, the SingleLetterDisplaySimple module described in the next section of this lab is about the simplest possible example that shows all of the strange features of JavaFX that we'll need to use to complete this lab.
Likewise, the bare-bones Editor that you'll be building in this lab is itself a HelloWorld — a stepping stone to the glorious editor you'll build by week 8. This is learning to swim in calm seas.
Examples
SingleLetterDisplaySimple
Kay Ousterhout (lead developer on this project) has created a number of example files for you to use to learn the basic actions that you'll need to succeed in using JavaFX.
Try compiling and running
SingleLetterDisplaySimple. Things to try out:
- Typing letters.
- Using the shift key to type capital letters.
- Using the up and down keys to change the font size.
If you get "package does not exist" messages, this means you installed the OpenJDK (an open source alternative to Oracle's implementation of Java). You will need to install Oracle's version of Java (see lab 1b), which will let you use JavaFX.
Once you've had a chance to play around with the demo, it's time to move on to building our editor.
Creating a Crude Editor
Head to your
proj2/editor folder. In this folder, there should be a file called
Editor.java. This is the file that we'll edit today. We'll be adding three new features to our editor:
- Task 1: The ability to show all characters typed so far (instead of just the most recent).
- Task 2: Display of text in the top left corner of the window (instead of in the middle).
- Task 3: The ability to delete characters using the backspace key.
This lab is graded on effort and you'll receive full credit even if you don't complete all 3 of these features by Friday. However, we strongly recommend that you do so if at all possible.
Depending on whether you're using the command line or IntelliJ, follow the directions below.
Command Line
Let's start by making sure you can compile and run the skeleton file. If you're using the command line, head to the editor subfolder of your proj2 directory.
$ cd proj2 $ cd editor $ javac Editor.java
This should compile your Editor. However, because Editor is part of a package, you'll need to run the code from one folder up, and using the full package name:
$ cd .. $ java editor.Editor
This should start up your program, which at the moment, doesn't do anything. Press control-C in order to quit.
IntelliJ
Let's make sure we can compile our file in IntelliJ. Open up the Editor.java file that came as part of the proj2 skeleton. Try to compile the file. Compilation should complete successfully. Don't bother running the file at this point, since it won't display anything.
Task 1: Multiple Characters
Your first goal is to make it so that your Editor is just like SingleLetterDisplaySimple, except that it displays all characters typed so far, as shown in the lab5 video. Copy and paste the relevant parts of SingleLetterDisplaySimple demo over to your Editor.java file. Now modify this code in such a way that your editor shows everything that has been typed so far. Don't worry about the delete/backspace key for now.
This will probably take quite a bit of experimentation!
Hint: You'll need to change the argument of setText.
Recommendation: If at any point you need a list-like data structure, use the java.util.LinkedList class instead of one of your Deque classes from project 1.
Task 2: Top Left
Now modify your code so that the text is displayed not in the center of the screen, but in the top left.
Task 3: Backspace
Now modify your code so that backspace properly removes the most recently typed character. Once you're done with this, you're done with the lab.
Our First Blind Alley
It turns out that the approach we've adopted so far is doomed. See the video explanation or read the text below.
The issue is that there is no easy way that we can support an important later feature of the project: Clicking. This feature is supposed to allow us to click on a certain part of the screen and move the cursor to that postiion. JavaFX can tell us the X and Y coordinates of our mouse click, which seems like it'd be plenty of information to suppose click-to-move-cursor.
However, the problem is that we're deferring all of the work of laying out the text to the setText method. In other words, if our entered text is:
I really enjoy eating delicious potatoes with the sunset laughing at me, every evening.
Our current approach lets the JavaFX figure out how to display all of these letters on the screen. If we clicked on the line right after the word "with", JavaFX would dutifully tell us the X and Y coordinate. However, we'd have no way of figuring out that this is position #48 in the text.
You'll need to come up with an alternate approach. There is a way to do this without using any new JavaFX features. However, it means you're going to need to build a text layout engine. This might be a good next step in the project.
Note: While JavaFX does support allowing you to click on a letter to get the position of this letter, in this example, the place we are clicking is not on a letter, but just on whitespace in the middle of the text.
Running the 61B Style Checker
Remember that all code submitted from this point forward.
Some of these style rules may seem arbitrary (spaces vs. tabs, exact indentation, must end with a newline, etc.). They are! However, it is likely that you'll work with teams in the future that have similarly stringent constraints. Why do they do this? Simply to establish a consistent formatting style among all developers. It is a good idea to learn how to use your tools to conform to formatting styles.
Submission
Submit Editor.java and MagicWord.java. If you're submitting before 10 PM on Wednesday, use the magic word "early". | http://sp16.datastructur.es/materials/lab/lab5/lab5.html | CC-MAIN-2020-50 | refinedweb | 1,655 | 73.17 |
NAME
getnewvnode - get a new vnode
SYNOPSIS
#include <sys/param.h> #include <sys/vnode.h> #include <sys/mount.h> int getnewvnode(const char *tag, struct mount *mp, vop_t **vops, struct vnode **vpp);
DESCRIPTION
The getnewvnode() function initializes a new vnode, assigning it the vnode operations passed in vops. The vnode is either freshly allocated, or taken from the head of the free list depending on the number of vnodes already in the system. The arguments to getnewvnode() are: tag The file system type string. This field should only be referenced for debugging or for userland utilities. mp The mount point to add the new vnode to. vops The vnode operations to assign to the new vnode. vpp Points to the new vnode upon successful completion.
RETURN VALUES
getnewvnode() returns 0 on success. There are currently no failure conditions - that do not result in a panic.
AUTHORS
This manual page was written by Chad David 〈[email protected]〉. | http://manpages.ubuntu.com/manpages/hardy/man9/getnewvnode.9.html | CC-MAIN-2014-41 | refinedweb | 158 | 59.9 |
LightSwitch has always had support for storing pictures in a database through its “Image” business type. However, often it is not feasible to store images in a database, due to size and/or accessibility. In this post I’ll show you how you can leverage Azure blob storage to store images used in your HTML client applications. This is a good architecture particularly if your application is hosted in an Azure Website already. It’s also pretty easy to do.
Setting up Azure Blob Storage
Setting up storage is easy, just follow these directions: Create an Azure Storage account
That article also explains how to access storage programmatically from .NET and how to store settings in your web.config so I encourage you to read through all of it. For the purposes of this post, I’m going to focus on the pieces needed to integrate this into your LightSwitch HTML app.
After you create & name your storage account, click “Manage Access Keys” to grab the access key you will need to supply in your connection string.
Once you have the storage account set up, you can programmatically create containers and save blobs to them from your LightSwitch .NET Server project. Let’s take a look at an example.
Setting up the Data Model
To get this to work elegantly in LightSwitch, we’re going to utilize a couple business types: Image and Web Address. The Image type will only be used to “transport” the bytes into storage for us. We’ll use the Web Address for viewing the image. We can set up the blog container so that we can address blobs directly via a URL as you will see shortly.
For this example, assume a User can have many Pictures. Here’s our data model. Notice the Picture entity has three important properties: Image, ImageUrl, and ImageGuid.
The ImageGuid is used to generate a unique Id that becomes part of the addressable URL of the image in Azure blob storage. It’s necessary so we can find the correct blob. Of course you can come up with your own unique naming, and you can even store them in different containers if you want.
Creating the Screens
When you create your screens, make sure that the ImageUrl is being used to display the picture and not the Image property. A Web Address business type also allows you to choose a Image control to display it. For instance, I’ll create a Common Screen Set for my User table and include Pictures.
Then open up the ViewUser screen and on the Pictures tab, remove the Image and instead display the ImageUrl as an Image control.
Then add a tap action on the Tile List to viewSelected on the Picture. Choose a new screen, Select View Details Screen and select Picture as the screen data. On this screen, you can display the picture as actual size using the same technique as above, but setting the ImageUrl image control to “Fit to Content” in the appearance section of the properties window. You can also set the Label position to “None”.
Finally, we’ll want to allow adding and editing pictures. On the ViewUser screen, add a button to the Command Bar and set the Tap action to addAndEditNew for the Picture. Create a new screen, add a new Add Edit Details screen and set the screen data to Picture. On the View Picture screen, add a button and set the Tap action to edit for the Picture and use the same Add Edit Picture Screen.
Custom JavaScript Control for Uploading Files
Now that our screens are in place and are displaying the ImageUrl how we like, it’s time to incorporate a custom JavaScript control for uploading the files. This utilizes the Image property which is storing the bytes for us on the client side. I’ll show you in a minute how we can intercept this on the Server tier and send it to Azure storage instead of the database, but first we need a control to get the bytes from our users.
Luckily, you don’t have to write this yourself. There’s a control that the LightSwitch team wrote a wile back that will work swimmingly with modern browsers as well as older ones that don’t support the HTML5 method of reading files. It’s part of a tutorial, but you can access the files directly and copy the code you need.
image-uploader.js
image-uploader-base64-encoder.aspx
Put them in your HTMLClient\Scripts folder. Then add a reference to the JavaScript in your default.htm file.
<script type="text/javascript" src="Scripts/image-uploader.js"></script>
Finally, add the custom control to the Add Edit Picture screen. This time, make sure you use the Image property, not ImageUrl. We will be setting the bytes of this property using the image-uploader control.
While the custom control is selected, drop down the “Write Code” button from the top of screen designer and set the code in the _render method like so:
myapp.AddEditPicture.Image_render = function (element, contentItem) { // Write code here. createImageUploader(element, contentItem, "max-width: 300px; max-height: 300px"); };
Now for the fun part. Saving to Azure blob storage from your LightSwitch Server tier.
Saving to Blob Storage from the LightSwitch Update Pipeline
That’s all we have to do with the client – now it’s time for some server coding in .NET. First, we’ll need some references to the Azure storage client libraries for .NET. You can grab these from NuGet. Right-click on your Server project and select “Manage NuGet Packages”. Install the Windows.Azure.Storage 3.1 package.
Now the trick is to intercept the data going through the LightSwitch update pipeline, taking the Image data and saving it to storage and then setting the Image data to null. From there, LightSwitch takes care of sending the rest of the data to the database.
Open the Data Designer to the Picture entity and drop down the Write Code button in order to override a couple methods “Picture_Inserting” and “Picture_Updating” on the ApplicationDataService class. (We could also support Delete, but I’ll leave that as an exercise for the reader – or a follow up post :-)).
On insert, first we need to create a new Guid and use that in the URL. Then we can save to storage. On update, we’re just checking if the value has changed before saving to storage.
VB:
Private Sub Pictures_Inserting(entity As Picture) 'This guid becomes the name of the blob (image) in storage Dim guid = System.Guid.NewGuid.ToString() entity.ImageGuid = guid 'We use this to display the image in our app. entity.ImageUrl = BlobContainerURL + guid 'Save actual picture to blob storage SaveImageToBlob(entity) End Sub Private Sub Pictures_Updating(entity As Picture) 'Save actual picture to blob storage only if it's changed Dim prop As Microsoft.LightSwitch.Details.IEntityStorageProperty =
entity.Details.Properties("Image")
If Not Object.Equals(prop.Value, prop.OriginalValue) Then SaveImageToBlob(entity) End If End Sub
C#:
partial void Pictures_Inserting(Picture entity) { //This guid becomes the name of the blob (image) in storage string guid = System.Guid.NewGuid().ToString(); entity.ImageGuid = guid; //We use this to display the image in our app. entity.ImageUrl = BlobContainerURL + guid; //Save actual picture to blob storage SaveImageToBlob(entity); } partial void Pictures_Updating(Picture entity) { //Save actual picture to blob storage only if it's changed Microsoft.LightSwitch.Details.IEntityStorageProperty prop = (Microsoft.LightSwitch.Details.IEntityStorageProperty)entity.Details.Properties["Image"]; if(!Object.Equals(prop.Value, prop.OriginalValue)){ SaveImageToBlob(entity); } }
Working with Azure Blob Storage
The article I referenced above walks you through working with blob storage but here’s the basics of how we can create a new container if it doesn’t exist, and save the bytes. First, include the Azure storage references at the top of the ApplicationDataService file.
VB:
Imports Microsoft.WindowsAzure.Storage Imports Microsoft.WindowsAzure.Storage.Blob
C#:
using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Blob;
Next, get the connection string, the container name, and the base URL to the container. You can read these from your web.config, but for this example I am just hardcoding them in some constants on the ApplicationDataService class.
VB:
'TODO put in configuration file Const BlobConnStr = "DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=*****" Const BlobContainerName = "pics" Const BlobContainerURL =
C#:
const string BlobConnStr = "DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=****"; const string BlobContainerName = "pics"; const string BlobContainerURL = "";
Next create a static constructor to check if the container exists and if not, create it. It will also set the container’s blobs so they are publically accessible via direct URLs. This will only run once when the web application is first started (or restarted).
VB:
Shared Sub New() 'Get our blob storage account Dim storageAccount As CloudStorageAccount = CloudStorageAccount.Parse(BlobConnStr) 'Create the blob client. Dim blobClient As CloudBlobClient = storageAccount.CreateCloudBlobClient() 'Retrieve reference to blob container. Create if it doesn't exist. Dim blobContainer = blobClient.GetContainerReference(BlobContainerName) If Not blobContainer.Exists Then blobContainer.Create() 'Set public access to the blobs in the container so we can use the picture
' URLs in the HTML client. blobContainer.SetPermissions(New BlobContainerPermissions With {.PublicAccess = BlobContainerPublicAccessType.Blob}) End If End Sub
C#:
static ApplicationDataService() { //Get our blob storage account CloudStorageAccount storageAccount = CloudStorageAccount.Parse(BlobConnStr); //Create the blob client. CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); //Retrieve reference to blob container. Create if it doesn't exist. CloudBlobContainer blobContainer = blobClient.GetContainerReference(BlobContainerName); if(!blobContainer.Exists()) { blobContainer.Create(); //Set public access to the blobs in the container so we can use the picture
// URLs in the HTML client. blobContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob }); } }
Now that we have created the container on our storage account, the saving is pretty easy. Notice the “trick” here. After we save the image to storage, set the Image property to null so that it isn’t saved into the database.
VB:
Private Sub SaveImageToBlob(p As Picture) Dim blobContainer = GetBlobContainer() 'Get reference to the picture blob or create if not exists. Dim blockBlob As CloudBlockBlob = blobContainer.GetBlockBlobReference(p.ImageGuid) 'Save to storage blockBlob.UploadFromByteArray(p.Image, 0, p.Image.Length) 'Now that it's saved to storage, set the Image database property null p.Image = Nothing End Sub Private Function GetBlobContainer() As CloudBlobContainer 'Get our blob storage account Dim storageAccount As CloudStorageAccount = CloudStorageAccount.Parse(BlobConnStr) 'Create the blob client Dim blobClient As CloudBlobClient = storageAccount.CreateCloudBlobClient() 'Retrieve reference to a previously created container. Return blobClient.GetContainerReference(BlobContainerName) End Function
C#:
private void SaveImageToBlob(Picture p) { CloudBlobContainer blobContainer = GetBlobContainer(); //Get reference to the picture blob or create if not exists. CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference(p.ImageGuid); //Save to storage blockBlob.UploadFromByteArray(p.Image, 0, p.Image.Length); //Now that it's saved to storage, set the Image database property null p.Image = null; } private CloudBlobContainer GetBlobContainer() { //Get our blob storage account CloudStorageAccount storageAccount = CloudStorageAccount.Parse(BlobConnStr); //Create the blob client CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); //Retrieve reference to a previously created container. return blobClient.GetContainerReference(BlobContainerName); }
That’s all here is to it. Run the application and upload a picture.
Then go back to your Azure Management Portal and inspect your storage account. You will see the pictures in the storage container. And if you check your database, the Image will be null.
Desktop Client Considerations: Reading from Blob Storage from the LightSwitch Query Pipeline
If you want this to work with the Desktop (Silverlight) client, you’ll need to retrieve the Image bytes from storage into the Image property directly. This is because the built-in LightSwitch control for this client works with bytes, not URLs. You can use the same code to save the images above, but you’ll also need to read from blob storage anytime a Picture entity is queried from the database by tapping into the query pipeline. Here’s some code you can use in the same ApplicationDataService class.
VB:
Private Sub Query_Executed(queryDescriptor As QueryExecutedDescriptor) 'This would be necessary if using the Silverlight client. If queryDescriptor.SourceElementType.Name = "Picture" Then For Each p As Picture In queryDescriptor.Results ReadImageFromBlob(p) Next End If End Sub Private Sub ReadImageFromBlob(p As Picture) 'Retrieve reference to the picture blob named after it's Guid. If p.ImageGuid IsNot Nothing Then Dim blobContainer = GetBlobContainer() Dim blockBlob As CloudBlockBlob = blobContainer.GetBlockBlobReference(p.ImageGuid) If blockBlob.Exists Then Dim buffer(blockBlob.StreamWriteSizeInBytes - 1) As Byte blockBlob.DownloadToByteArray(buffer, 0) p.Image = buffer End If End If End Sub
C#:
partial void Query_Executed(QueryExecutedDescriptor queryDescriptor) { //This would be necessary if using the Silverlight client. if(queryDescriptor.SourceElementType.Name == "Picture") { foreach (Picture p in (IEnumerable<Picture>)queryDescriptor.Results) { ReadImageFromBlob(p); } } } private void ReadImageFromBlob(Picture p) { //Retrieve reference to the picture blob named after it's Guid. if (p.ImageGuid != null) { CloudBlobContainer blobContainer = GetBlobContainer(); CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference(p.ImageGuid); if (blockBlob.Exists()) { byte[] buffer = new byte[blockBlob.StreamWriteSizeInBytes - 1]; blockBlob.DownloadToByteArray(buffer, 0); p.Image = buffer; } } }
Wrap Up
Working with Azure Blob Storage is really easy. Have fun adapting this code to suit your needs, making it more robust with error logging, etc. I encourage you to play with storing other types of data as well. It makes a lot of sense to utilize Azure blob storage, particularly if your LightSwitch app is already being hosted in Azure.
Enjoy!
Join the conversationAdd Comment
Hi Beth
Nice Post. But instead of using Public access permission for blobs, Is it possible to view the images on demand by using the blob Shared Access Signature.
I think public blobs access may be a security leak for the application.
Regards
@babloo1436 – The client in this case is the .NET server so it's probably easier to just use the AccessKey and read the images from storage via the middle-tier query interceptor.
FWIW though, that's why I just made the blobs themselves public and not the container. Trying to guess at GUIDs is pretty tough so this solution may suit some situations just fine and it avoids the read in the middle-tier.
Hi Beth
Really nice. Would you look at returning the selected file URL and Type as well. I can see its in the Data at the end as 'type=' but no source URL?
I have successfully used this to download any file Type to the local Db as well and its AWESOME. .xlsx .docx .pdf and I need Type to unpack it from there.
In fact I would have thought this to be a tutorial for any file which is what we need in this great product Lightswitch HTML.
I need to customize the UI of my app, is there any guide that give more than just changing font or background colors? I'm seriously considering using LS as a BackEnd solution and Bootstrap as a FrontEnd… I don't know why I can't have other Libraries instead of JQuery Mobile… it just does not work for me since I use the Silverlight UI for my App…. JQuery Mobile is just too big for my forms…
Thanks Beth for sharing.. hope you can read and comment my sugestion.
[email protected], [email protected]
Hi Beth,
This is how we can upload the image in the blob. If I want to upload any other format of file like .pdf, .txt then what should we have to change in this code?
Thanks.
I would like to retain the original file name and extension as a field in the database. How can you pass that from the file uploader to the database? I'm sure there is a way but I'm just not seeing it clearly.
Hi,
I have my LS desktop application already hosted in Azure but I need to save two files to a Blob storage (.xml (Stringbuilder) and .jpg (binary field)) when the users execute a private sub button for use these files from an external application. Is any way to do that from the client?
Hi Beth
Nice – I'm using the idea for all sorts of files in Silverlight Client but as size increase, update time is increasing as well.
How can I change the code so I only execute ReadImageFromBlob on the selected item instead of the complete table (Query_executed)?
Kind regards
I already have the images stored in Azure blob. I need to Display images in Datagrid row from Azure storage. Images name stored in Database. I am using LightSwitch Desktop application with Visual Studio 2013. Could you please guide how to show the images into Grid column.
Thanks Bath, Nice article.
Now I am able to show the image inside DataGrid of Desktop Application. Can you guide me how to open the large image on click of small image on Grid Column.
Hi, Beth
on Pictures_Updating Orignanl value is null for Image. Can you give me the idea
Hello All,
Nice post Champion,
I can see some of the comment has said they can use this to the LightSwitch desktop client, but i am not able to figure out how i can use this in desktop client.
So Can Beth/Anyone else can show me a step by step configuration to add different type of files to blob and download the file to browser or client machine on button click.?
Kindly reply to this comment.
Thank you.
Regards,
Nirav
Beth, thank you much!
I'm confused on where to place the static constructor. If we have the Solution Explorer open, which file to we open and place that code? I understand where to put the Pictures_Inserting and Pictures_Updating code, but I'm a little lost after that.
Hi,
It worked but just save buttom doesnt work .is a part code need for fix it?
thx | https://blogs.msdn.microsoft.com/bethmassi/2014/05/01/storing-images-in-azure-blob-storage-in-a-lightswitch-application/ | CC-MAIN-2016-30 | refinedweb | 2,942 | 57.47 |
This example shows how to generate code that exchanges data with external, existing code. Construct and configure a model to match data types with the external code and to avoid duplicating type definitions and memory allocation (definition of global variables). Then, compile the generated code together with the external code into a single application.
Create the file
ex_cc_algorithm.c in your
current folder.
#include "ex_cc_algorithm.h" inSigs_T inSigs; float_32 my_alg(void) { if (inSigs.err == TMP_HI) { return 27.5; } else if (inSigs.err == TMP_LO) { return inSigs.sig1 * calPrms.cal3; } else { return inSigs.sig2 * calPrms.cal3; } }
The C code defines a global structure variable named
inSigs. The code also
defines a function,
my_alg, that uses
inSigs
and another structure variable named
calPrms.
Create the file
ex_cc_algorithm.h in your
current folder.
#ifndef ex_cc_algorithm_h #define ex_cc_algorithm_h typedef float float_32; typedef enum { TMP_HI = 0, TMP_LO, NORM, } err_T; typedef struct inSigs_tag { err_T err; float_32 sig1; float_32 sig2; } inSigs_T; typedef struct calPrms_tag { float_32 cal1; float_32 cal2; float_32 cal3; } calPrms_T; extern calPrms_T calPrms; extern inSigs_T inSigs; float_32 my_alg(void); #endif
The file defines
float_32 as an alias of the C data type
float. The file also defines an enumerated data type,
err_T, and two structure types,
inSigs_T
and
calPrms_T.
The function
my_alg is designed to calculate
a return value by using the fields of
inSigs and
calPrms,
which are global structure variables of the types
inSigs_T and
calPrms_T.
The function requires another algorithm to supply the signal data
that
inSigs stores.
This code allocates memory for
inSigs, but not for
calPrms. Create a model whose generated code:
Defines and initializes
calPrms.
Calculates values for the fields of
inSigs.
Reuses the type definitions (such as
err_T and
float_32)
that the external code defines.
So that you can create enumerated and structured data in the
Simulink® model, first create Simulink representations of the data types that the external code
defines. Store the Simulink types in a new data dictionary named
ex_cc_integ.sldd.
Simulink.importExternalCTypes('ex_cc_algorithm.h',... 'DataDictionary','ex_cc_integ.sldd');
The data dictionary appears in your current folder.
To inspect the dictionary contents in the Model Explorer, in your
current folder, double-click the file,
ex_cc_integ.sldd.
The
Simulink.importExternalCTypes function
creates
Simulink.Bus,
Simulink.AliasType, and
Simulink.data.dictionary.EnumTypeDefinition
objects that correspond to the custom C data types from
ex_cc_algorithm.h.
Create a new model and save it in your current folder as
ex_struct_enum_integ.
Link the model to the data dictionary. On the Modeling tab, under Design, click Data Dictionary.
Add algorithmic blocks that calculate the fields of
inSigs.
Now that you have the algorithm model, you must:
Organize the output signals into a structure variable
named
inSigs.
Create the structure variable
calPrms.
Include
ex_cc_algorithm.c in the
build process that compiles the code after code generation.
Add a Bus Creator block near the existing Outport blocks. The output of a Bus Creator block is a bus signal, which you can configure to appear in the generated code as a structure.
In the Bus Creator block, set these parameters:
Number of inputs to
3
Output data type to
Bus:
inSigs_T
Output as nonvirtual bus to selected
Delete the three existing Outport blocks (but not the signals that enter the blocks).
Connect the three remaining signal lines to the inputs of the Bus Creator block.
Add an Outport block after the Bus Creator block. Connect the output of the Bus Creator to the Outport.
In the Outport block, set the Data
type parameter to
Bus:
inSigs_T.
On the Modeling tab, click Model Data Editor.
On the Inports/Outports tab, for the
Inport blocks labeled
In2 and
In3, change Data Type from
Inherit: auto to
float_32.
Change the Change View drop-down list from
Design to
Code.
For the Outport block, set Signal
Name to
inSigs.
Set Storage Class to
ImportFromFile.
Set Header File to
ex_cc_algorithm.h.
Inspect the Signals tab.
In the model, select the output signal of the Multiport Switch block.
In the Model Data Editor, for the selected signal, set
Name to
err.
Set the name of the output signal of the Gain block to
sig1.
Set the name of the output signal of the Gain1 block to
sig2.
When you finish, the model stores output signal data (such as
the signals
err and
sig1) in
the fields of a structure variable named
inSigs.
Because you set Storage Class to
ImportFromFile,
the generated code does not allocate memory for
inSigs.
Configure the generated code to define the global structure variable,
calPrms, that the external code needs.
In the Model Explorer Model Hierarchy pane, under the dictionary node ex_cc_integ, select the Design Data node.
In the Contents pane, select the
Simulink.Bus object
calPrms_T.
In the Dialog pane (the right pane), click Launch Bus Editor.
In the Bus Editor, in the left pane, select
calPrms_T.
On the Bus Editor toolbar, click the Create/Edit a Simulink.Parameter Object from a Bus Object button.
In the MATLAB Editor, copy the generated MATLAB code and run the code
at the command prompt. The code creates a
Simulink.Parameter object in the base workspace.
In the Model Explorer Model Hierarchy pane, select Base Workspace.
Use the Model Explorer to move the parameter object,
calPrms_T_Param, from the base workspace to the
Design Data section of the data dictionary.
With the data dictionary selected, in the
Contents pane, rename the parameter object as
calPrms.
In the Model Data Editor, select the Parameters tab.
Set the Change view drop-down list to
Design.
For the Gain block, replace the value
13.8900013 with
calPrms.cal1.
In the other Gain block, use
calPrms.cal2.
While editing the value of the other Gain block, next
to
calPrms.cal2, click the action button
and select calPrms > Open.
In the
calPrms property dialog box, next to the
Value box, click the action button
and select Open Variable
Editor.
Use the Variable Editor to set the field values in the parameter object.
For the fields
cal1 and
cal2, use the numeric values that the
Gain blocks in the model previously
used.
For
cal3, use a nonzero number such as
15.2299995.
When you finish, close the Variable Editor.
In the property dialog box, set Storage class to
ExportedGlobal. Click
OK.
Use the Model Explorer to save the changes that you made to the dictionary.
Configure the model to include
ex_cc_algorithm.c in
the build process. Set Configuration Parameters > Code Generation > Custom Code > Additional build information > Source files to
ex_cc_algorithm.c.
Generate code from the model.
Inspect the generated file
ex_struct_enum_integ.c.
The file defines and initializes
calPrms.
/* Exported block parameters */ calPrms_T calPrms = { 13.8900013F, 0.998300076F, 15.23F } ; /* Variable: calPrms
The generated algorithm in the model
step function
defines a local variable for buffering the value of the signal
err.
err_T rtb_err;
The algorithm then calculates and stores data in the fields of
inSig.
inSigs.err = rtb_err; inSigs.sig1 = (rtU.In2 + rtDW.DiscreteTimeIntegrator_DSTATE) * calPrms.cal1; inSigs.sig2 = (real32_T)(calPrms.cal2 * rtDW.DiscreteTimeIntegrator_DSTATE);
To generate code that uses
float_32 instead
of the default,
real32_T, instead of manually specifying
the data types of block output signals and bus elements, you can use
data type replacement (Configuration Parameters > Code Generation > Data Type Replacement). For more information, see Replace and Rename Data Types to Conform to Coding Standards.
Simulink.importExternalCTypes | https://au.mathworks.com/help/ecoder/ug/exchange-structured-and-enumerated-data-between-generated-and-external-code.html | CC-MAIN-2021-21 | refinedweb | 1,207 | 50.84 |
Hey!
Read all your tutorials, and they are great.
One question here though.
Instead of using the following:
#ifndef
#define
#endif
Is it OK to use #pragma once?
Don't they do the exact same thing? If not, what's the difference?
I heard somewhere that #pragma once is OS-specific, and if you want to make portable code, you should use #ifndef...?
Sincerely
I discuss #pragma once in the next lesson. Short answer, you're better off using explicit header guards even though #pragma once will probably work in most places.
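For reference, a minimal sketch of both forms (add.h and ADD_H are just placeholder names; pick a unique guard name per header):

// add.h -- explicit header guard (portable, works everywhere)
#ifndef ADD_H
#define ADD_H

int add(int x, int y); // declarations/definitions go between the guard lines

#endif

// add.h -- same effect on most mainstream compilers, but not part of the C++ standard
#pragma once

int add(int x, int y);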
My dear c++ Teacher,
Please let me ask the following question:
In subsection "Conditional compilation" 2nd paragraph, by "value" I understand "identifier", e.g. PRINT_JOE, PRINT_BOB. Is it correct?
With regards and friendship.
Yes.
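For example, here is a minimal sketch of how such an identifier drives conditional compilation (using the lesson's PRINT_JOE / PRINT_BOB names):

#include <iostream>

#define PRINT_JOE

int main()
{
#ifdef PRINT_JOE
    std::cout << "Joe\n"; // kept, because PRINT_JOE is defined above
#endif

#ifdef PRINT_BOB
    std::cout << "Bob\n"; // stripped out by the preprocessor, since PRINT_BOB was never defined
#endif

    return 0;
}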
I am getting an error message like this. Can you please help me out with this?
You have code that lives outside of a function. This isn't allowed. Move the #ifdefs and associated code inside main(), or put them inside another function that you're calling from main().
Okay Thanks Alex for helping me out on this issue.
My dear c++ Teacher,
Please let me say that the example implies a suggestion to use Japanese money!
Also, by chance I found that the following program works and outputs 9.
With regards and friendship.
In "scope of defines" section, I was wondering is there any specific reason for you to put #include<iostream> in funcition.cpp not main.cpp?
And I wrote same coding as mentioned above, but my computer did not print out "printing". I am using CLion instead of Visual Studio and just wondering that cout works differently from editors.
Each code file should include all of the headers that contain declarations for the functionality it is using.
function.cpp uses std::cout and operator<<, therefore it needs to include iostream itself. If you omit it, the compiler will probably complain it doesn't know what std::cout is.
Each code file is compiled separately, and the compiler doesn't remember anything from the previous file. So if you had included iostream in main.cpp and not in function.cpp, the compiler would not remember that you had done so.
std::cout should work fine in clion. Perhaps your output window is either closing immediately, or going to a different window than the one you're looking at?
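A minimal sketch of that idea (the doSomething body is just illustrative):

// function.cpp -- uses std::cout, so it must include iostream itself
#include <iostream>

void doSomething()
{
    std::cout << "printing!\n";
}

// main.cpp -- doesn't use std::cout here, so it doesn't need iostream; it only needs a forward declaration
void doSomething();

int main()
{
    doSomething();
    return 0;
}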
Hey Alex! You said that "almost anything [function-like macros] can do can be done by an (inline) function.". Then what is something a macro can do but a function can't?
The only good use case I can think of for using a function-like macro over a normal function is for implementing asserts. If you're not familiar with asserts, I cover them here:
Asserts typically display the code file and line of code causing the assert. This can't be done via a normal function, because calling a function changes the line of code being executed (and maybe the file). And it can't be done via an inline function because inline functions don't have access to the line of code or name of the file. But it can be done via the preprocessor.
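Here's a rough sketch of such a macro (MY_ASSERT is a made-up name; the standard assert in <cassert> works along the same lines). Because the preprocessor expands the macro at the call site, __FILE__ and __LINE__ refer to the caller's file and line, which a normal function could never report:

#include <cstdlib>
#include <iostream>

#define MY_ASSERT(condition) \
    do { \
        if (!(condition)) { \
            std::cerr << "Assert failed: " #condition \
                      << " in " << __FILE__ << " at line " << __LINE__ << '\n'; \
            std::abort(); \
        } \
    } while (0)

int main()
{
    int x = 5;
    MY_ASSERT(x == 6); // prints this file name and this line number, then aborts
    return 0;
}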
HEY ALEX!
HERE IS WHAT I NEED TO KNOW:
WHAT IS THIS:
g++ -o main -I /source/includes main.cpp
AND WHERE TO PLACE IT? WHAT DOES IT DO?
AND A QUESTION ARE YOU A SOFTWARE ENGINEER?
AND HOW LONG YOU TAKE TO LEARN C++?
PLEASE ONE MORE ? YOU ALSO LEARN C++ ONLINE?
PLEASE ANSWER IT
WITH DUE RESPECT TO YOU:]
> g++ -o main -I /source/includes main.cpp
This tells g++ to compile main.cpp into a program called main, from the command line. If you're using an IDE, you don't need to know this.
> AND A QUESTION ARE YOU A SOFTWARE ENGINEER?
Not any more.
> AND HOW LONG YOU TAKE TO LEARN C++? PLEASE ONE MORE ? YOU ALSO LEARN C++ ONLINE?
I dunno, I learned it a long time ago, before they had online courses.
THANK YOU VERY MUCH FOR LETTING ME KNOW ABOUT YOUR EXPERIENCE WITH C++.
AND FINALLY, YOU ARE MY GREAT TEACHER.
HEY ALEX!
I AM A NEW LEARNER OF THIS LANGUAGE.
HERE IS WHAT I WANT TO KNOW:
WHAT IS A DIRECTORY AND WHAT IS ITS FUNCTION?
CAN YOU PLEASE REPLY WITH AN EXAMPLE;
IT'S VERY IMPORTANT
FOR ME.............;)
A directory is a folder that holds files and exists as part of the operating system's file system. If you're not familiar with the role of files and directories in the operating system, I'd suggest doing some separate reading on those topics.
Where can I learn more about it?
Wikipedia is always a good starting point. You can also try a Google search for "file system tutorial".
thanks my great teacher
Where do I put this code? In the header file?
What code?
I have a problem with the last example (The scope of defines); I am getting this log in the output field:
1>------ Build started: Project: 0110, Configuration: Debug Win32 ------
1>Compiling...
1>stdafx.cpp
1>Compiling...
1>0110.cpp
1>c:\users\agencija\documents\visual studio 2005\projects\0110\0110\0110.cpp(11) : error C3861: 'doSomething': identifier not found
1>function.cpp
1>c:\users\agencija\documents\visual studio 2005\projects\0110\0110\function.cpp(13) : fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source?
1>Generating Code...
1>Build log was saved at ":\Users\Agencija\Documents\Visual Studio 2005\Projects\0110\0110\Debug\BuildLog.htm"
1>0110 - 2 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
What am I doing wrong?
In 0110.cpp, do you have a forward declaration for function doSomething?
In function.cpp, it looks like #include "stdafx.h" is either missing or isn't the first line of the file.
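In other words, function.cpp should start roughly like this (a sketch for a VS 2005 project with precompiled headers enabled):

// function.cpp
#include "stdafx.h" // must come before everything else in this file
#include <iostream>

void doSomething()
{
    std::cout << "Printing!\n";
}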
thanks a lot!
First of all, thank you for this site! You make the world a better place.
Now, the boring stuff. This can be ignored by experienced programmers. I was trying to get fancy and created a function called input() that would ask the user for a number and then return that number back to the caller. Then I put this function in a separate file, just as we did with the add() function. To my surprise, the compiler complained about std::cin and std::cout from the input() function despite the fact that I had #include <iostream> in the main.cpp file. I thought the main.cpp file gets compiled first :) Of course, I put another #include <iostream> in the input.cpp file and everything worked out fine. But I was wondering, isn't it a waste to use #include <iostream> multiple times? Then I learned about header guards and with a little Google-Fu I even found out about #pragma once, a directive that prevents <iostream> from being included multiple times.
A few things:
1) The C++ compiler compiles each file individually. It does not remember anything from files it has previously compiled. So if you use std::cout in a given file, that file needs to include iostream.
2) Header guards do _not_ prevent a header from being included into multiple different files. They prevent a header from being included more than once into the same file.
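For example, a typical guarded header looks something like this (ADD_H is just a conventional guard name):

// add.h
#ifndef ADD_H
#define ADD_H

int add(int x, int y); // forward declaration

#endif

If add.h somehow gets included twice into the same .cpp file (say, directly and via another header), the guard strips out the second copy; but every .cpp file that needs add() still includes add.h once for itself.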
In your example of print, sometimes it really changes the decision, like:
Alex: Hi, not sure if you still maintain this site. When discussing the preprocessor, instead of saying that it makes changes to the text of a file, isn't it more accurate to say that the preprocessor creates a new temporary file in which the contents of the original source file have the preprocessor's changes merged in, and that this 'working copy' is what actually gets compiled by the compiler, not the original source file?
The distinction is important because none of the original files (either .cpp or .h files) are ever actually changed. They will be exactly the same after compilation as before, and so this might confuse new programmers.
Yeah, I can see how this might have been unclear. I've updated the lesson text to explicitly indicate that the original code files are not modified. Thanks for the feedback.
#include "stdafx.h"
#include <iostream>
#define anything "hello the world"
void hi()
{
std::cout << "say hi to me" << std::endl;
}
int main()
{
#ifdef anything
std::cout << anything << std::endl;
#endif // anything
#ifndef myname
#define myname
#endif
#ifdef myname
std::cout << "I am Jay " << std::endl;
#endif
#ifndef sayhi
hi();
#endif
return 0;
}
basically, I understand this lesson.
I am just wondering whether we are going to have more use of #defines in future lessons. Can't wait to read more.
In C++, defines are mostly used for header guards and conditional compilation. I talk about both of these shortly. Beyond that, most of what the preprocessor can do isn't used, because there are better, safer ways to do the same things.
My dear c++ Teacher,
Please let me describe my problem (I am compiling on codechef's online IDE):
I save (by Ctrl-S) the code of function.cpp under that name, but when I run main.cpp, the output is:
/home/WBEnb0/cc58Mmhf.o: In function `main':
prog.cpp:(.text.startup+0x12): undefined reference to `doSomething()'
collect2: error: ld returned 1 exit status
What is going wrong?
With regards and friendship.
It looks like codechef may be treating each tab as a separate program, rather than as separate files to be compiled into a single program. If that's actually the case, then it's not suitable for compiling programs that use multiple files.
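If you want to try the multi-file example locally instead, you can hand both files to the compiler in a single command, e.g. g++ -o main main.cpp function.cpp (assuming g++ is installed and both files sit in the current directory).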
My dear c++ Teacher,
Please let me say that the term "preprocessor" can be confused with the integrated-circuit "processor". So I think the term "precompiler" is a better fit for it.
With regards and friendship.
Agreed, but they didn't ask me for my opinion on what it should be called. :)
My dear c++ Teacher,
Please let me be glad that you agree with what I said!
With regards and friendship.
My dear c++ Teacher,
Please let me point out that in the example:
the color of the first and second comments is the same as that of the directives. Does it mean something?
With regards and friendship.
The preprocessor line comments being brown instead of green is due to a limitation of the syntax highlighter used on this website.
All comments in the source code are removed before the preprocessor runs, so there's no difference between a comment on a preprocessor line or elsewhere.
it worked ...
I was trying to compile the same code as yours but it is not working... it shows this error:
g++ -std=c++11 -o main *.cpp
main.cpp:4:10: error: 'cout' in namespace 'std' does not name a type
std::cout << FOO; // This FOO gets replaced with 9 because it's part of the normal code
^
sh-4.3
#define FOO 9 // Here's a macro substitution
int main()
{
#ifdef FOO // This FOO does not get replaced because it’s part of another preprocessor directive
std::cout << FOO; // This FOO gets replaced with 9 because it's part of the normal code
#endif
}
It looks like you forgot to #include the iostream header.
"When the preprocessor encounters this directive, any further occurrence of ‘identifier’ is replaced by ‘substitution_text’ (excluding use in other preprocessor directives)."
What do you mean with "(excluding use in other preprocessor directives)" ??????
And: "when the identifier is encountered by the preprocessor (outside of another preprocessor directive), it is removed and replaced by nothing!"
What do you mean with "(outside of another preprocessor directive)" ??????
Preprocessor directives don't affect other preprocessor directives. So, for example, if you had the following:
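(essentially the FOO snippet quoted in the comment above)

#define FOO 9

#ifdef FOO // this FOO is part of another preprocessor directive, so it is not replaced
std::cout << FOO; // this FOO is normal code, so it is replaced with 9
#endif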
For normal code, any occurrence of FOO (such as the one in the std::cout statement above) would be replaced by 9. However, other preprocessor statements are immune to this text substitution, so the FOO in #ifdef FOO is not replaced.
I've added this bit to the lesson, since it might be helpful for other readers.
Thank you Alex.
Thank you so much Alex, I have learned a lot from your tutorials
The best tutorials I have come across so far...
My question is do you have similar tutorials for any other language like java or python?
If yes,please reply and let me know.
Sorry, I don't.
Hi,
thanks Alex.
I have reached here after 5 days (going a bit slow, but I test and learn every bit of it) :)
A couple of questions I have:
1. I heard from some that C++ is the mother language and that you can learn others very fast if you learn C++. True?
2. I have also seen an extension for C++ named Xamarin (spelled correctly?) that lets you make Android apps. I'm going to buy it, but I wonder how it works... do you code in C++ and it converts it to Java? Or is it Java support for C++?
Many languages that came after C++ use a similar syntax to C++, so once you know C++, it is a lot easier to learn other languages. I don't know if I'd say you can learn them fast, but certainly a lot faster than trying to learn them from scratch.
I've never heard of Xamarin before, so I can't comment on how it works.
I guess I can't reply to a specific comment without an account? Anyway, in reply to the immediately above:
That makes more sense. To be fair, nowhere else I looked mentioned that caveat either. There's one more thing that isn't clear now, though; from the section on object-like macros without substitution text, I got the impression that the fact that they replace later appearances of their identifiers with nothing is what makes them useful for conditional compilation, but since that mechanic doesn't affect other preprocessor directives, and the only place the macro's identifier necessarily reappears in that context is inside other preprocessor directives, it seems that it doesn't in fact come into play at all there. Is that right?
There are no accounts on this site. All functionality is available to anonymous users. Just click the reply link next to the comment you want to reply to.
Object-like macros without substitution text are pretty much never used outside of conditional compilation directives (which are exempt from the substitution effects).
Huh, I could have sworn the reply buttons weren't there last time. They probably were there, but I did look.
Does that mean that one could also use an object-like macro with substitution text for conditional compilation (even though the substitution text would be pointless) and it would work just the same?
Thanks for the clarifications! This is a very helpful site.
I'd never thought to try it before. So I just tried it with Visual Studio 2015, and it does work (though as you note, the substitution text isn't used in this case).
You say an object-like macro without substitution text replaces each subsequent appearance of its identifier with nothing. I understand this to mean that in the example under "conditional compilation," the preprocessor changes line 3 from "#ifdef PRINT_JOE" to just "#ifdef". That, in turn, suggests that the specific circumstance under which code after #ifdef is compiled is when there is no identifier after the #ifdef. In that case, I would expect it to also be possible to get rid of the identifier myself, so that writing #ifdef with nothing after it would cause the code after the #ifdef to be compiled, but it seems that having an #ifdef without an identifier in the first place isn't allowed. Is the preprocessor allowed to cause it to happen, but I'm not?
No. I realize I didn't mention in the lesson that object-like macros don't affect other preprocessor directives. I've updated the lesson to mention that.
So when you #define PRINT_JOE, any occurrences of PRINT_JOE outside of preprocessor directives are removed, but any occurrences of PRINT_JOE inside of preprocessor directives (such as #ifdef PRINT_JOE) are left alone.
If this weren't the case, #ifdef would be useless.
Hey,
I'm using Visual Studio 2015 and had some problems with this code.
Even if I copy-pasted it in, it would give 2 errors.
Could it be that you just have to add ";" after "printing!" and "not printing!" in function.cpp?
Thanks
Yep, typo. Thanks for pointing that out. I've fixed the examples.
Thnx,
Just glad I found the solution myself :)
Great guide btw!
// function.cpp
#include <iostream> // needed here because this file uses std::cout

void doSomething()
{
#ifdef PRINT
    std::cout << "Printing!";
#endif
#ifndef PRINT
    std::cout << "Not printing!";
#endif
}

// main.cpp
void doSomething(); // forward declaration for function doSomething()

int main()
{
#define PRINT // this define only affects the rest of main.cpp, not function.cpp
    doSomething();
    return 0;
}
If directives defined in one code file can't be used by another, then why didn't we include the iostream directive in main.cpp as we did in function.cpp?
main.cpp doesn't use anything in the iostream header, so it doesn't need to be included.
However, if main.cpp did print something using std::cout, it would need to #include iostream itself.
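For instance, a hypothetical variant of main.cpp that also prints would need the include itself (a sketch, not the lesson's version):

// main.cpp
#include <iostream> // now required, because this file uses std::cout below

void doSomething(); // forward declaration

int main()
{
#define PRINT
    doSomething();
    std::cout << "back in main\n";
    return 0;
}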
Hey guys, I am unable to get the output for this code. What is wrong? Please explain.
Codeblocks in main.cpp
#include <iostream>
#define PRINT_JOE
int main()
{
#ifdef PRINT_JOE
std::cout << "Joe" << endl;
#endif
#ifdef PRINT_BOB
std::cout << "Bob" << endl;
#endif
return 0;
}
I am getting this error
||=== Build: Debug in Chapter1.10 (compiler: GNU GCC Compiler) ===|
s Folder\C++ Programs Practice\Chapter1.10\main.cpp|| In function 'int main()':|
s Folder\C++ Programs Practice\Chapter1.10\main.cpp|8| error: 'endl' was not declared in this scope|
s Folder\C++ Programs Practice\Chapter1.10\main.cpp|8| note: suggested alternative:|
C:\Program Files (x86)\CodeBlocks\MinGW\lib\gcc\mingw32\4.9.2\include\c++\ostream|564| note: 'std::endl'|
||=== Build failed: 1 error(s), 0 warning(s) (0 minute(s), 0 second(s)) ===|
endl should be std::endl. I've fixed the examples.
Hi guys. Why do we #define ADD_H in the following code? This code is about the previous examples. I deleted #define ADD_H and my code still works. Can you explain exactly what happens when we include "add.h"?
This is answered in the very next lesson 1.10a -- Header guards.
Is there a way to set a variable that is available within every function? For example, say I wanted to load a variable from a .txt file and have that variable used within a function that uses that number to produce a new digit? This is basically what I have so far.
Many File Program.cpp
test.txt
Yes, global variables are available to serve this purpose, though most programmers would argue you shouldn't use them unless necessary. We cover global variables and the reasons they are problematic in chapter 4.
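A minimal sketch of that idea (the file name and variable names are just examples):

#include <fstream>
#include <iostream>

int g_value = 0; // global variable, visible to every function in this file

int produceNewDigit()
{
    return g_value * 2; // uses the global directly, no parameter needed
}

int main()
{
    std::ifstream file("test.txt"); // assumes test.txt holds a single number
    file >> g_value;
    std::cout << produceNewDigit() << '\n';
    return 0;
}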
https://www.learncpp.com/cpp-tutorial/introduction-to-the-preprocessor/comment-page-2/ | CC-MAIN-2019-13 | refinedweb | 3,137 | 66.33
Dataset Card for "code-tutorials-en"
- en only
- 100 words or more
- reading ease of 50 or more
DatasetDict({
train: Dataset({
features: ['text', 'url', 'dump', 'source', 'word_count', 'flesch_reading_ease'],
num_rows: 223162
})
validation: Dataset({
features: ['text', 'url', 'dump', 'source', 'word_count', 'flesch_reading_ease'],
num_rows: 5873
})
test: Dataset({
features: ['text', 'url', 'dump', 'source', 'word_count', 'flesch_reading_ease'],
num_rows: 5873
})
})
Downloads last month: 46