This is the favorite of all, since apart from sending the process into the background you don't have to worry about the text output dirtying your terminal: nohup command & This not only runs the process in the background, it also generates a log (called nohup.out, in the current directory or, if that's not possible, your home directory). If you close or log out of the current shell, the process is not killed, because nohup prevents the child process from receiving the signals sent when the parent dies (i.e. the SIGHUP delivered to the parent on logout or when the current shell is closed). There's also disown, but that's rather an extension of the other answers than a method in itself: command & # our program is in the background disown # now it is detached from the shell; you can do whatever you want. These commands do not make it easy to recover the process output afterwards.
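Both techniques can be sketched in a few runnable lines; this assumes bash, and uses sleep as a stand-in for a real long-running command:

```shell
# A minimal sketch (bash). nohup makes the child immune to the SIGHUP sent
# when the terminal closes; output goes to nohup.out unless redirected.
nohup sleep 30 > /dev/null 2>&1 &
nohup_pid=$!

# disown instead detaches an already-running job from the shell's job table,
# so the shell won't forward SIGHUP to it on logout.
sleep 30 &
disown_pid=$!
disown

# Both processes are now running, detached from this shell.
kill -0 "$nohup_pid" && kill -0 "$disown_pid" && both_running=yes
kill "$nohup_pid" "$disown_pid"    # clean up the demo processes
```

Note that disown is a bash builtin, not a separate program, which is why it can reach into the shell's own job table.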
This is probably what you want: mycommand > output.log 2>&1 & This will start your command, redirecting both stdout and stderr to output.log, which you can name as you like. If you don't care about storing the output at all, you can use /dev/null instead of an actual file. The & executes the command in the background so that you can continue entering commands while it runs. 2>&1 redirects stderr to stdout so that all output is caught. Also, when you run a command like this, you should get a confirmation from the shell similar to this: [2] 1234 This means that your process is running in the background with process id 1234, so you can kill it later if you wish with kill -9 1234. An example with tmux: $ tmux new -d 'longrunningcommand' While the other answers using & to background will work, you have to redirect stdout (and stderr!). Without doing that, the output will go straight to your shell, mixing with whatever other output you may have.
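The whole pattern, from backgrounding through killing by PID, can be sketched end to end; sh -c '...' stands in here for a real long-running command:

```shell
# Background a command with its output captured in a log, note its PID,
# then stop it by PID later.
sh -c 'echo starting; sleep 30' > output.log 2>&1 &
pid=$!          # same PID the shell reports as e.g. "[2] 1234"
sleep 1         # give it a moment to write the log
kill "$pid"     # stop it by PID (keep -9 as a last resort)
logged=$(cat output.log)
rm -f output.log
```

The shell also stores the PID of the most recent background job in $!, so you don't have to copy it from the "[2] 1234" notification by hand.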
Backgrounding will also fail if you're running a long command and log out or get disconnected. The system will kill your job. If you aren't familiar with either screen or tmux, they basically allow you to completely detach from your shell. Instead of backgrounding your program, you background the whole shell. You can then switch back to it later, even from another computer. They both have a ton more features that you may or may not find useful beyond this use case.
Screen is the old, tried-and-true program; tmux is much younger but has learned from screen's past. (For completeness, since it has been answered already: you put a command in the background by adding & after the command: longcommand with arguments > redirection &) I'm adding this answer to address the other part of your question: there's no real equivalent of the spinner for showing in-progress background commands, but you can see the status of background commands by typing jobs or jobs -l. It'll show your backgrounded commands, and whether they're running, stopped via signal (e.g. with ^Z), or occasionally stopped because they're waiting for interactive input from you.
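A quick sketch of checking background jobs with jobs, again with sleep standing in for real commands:

```shell
# jobs lists the shell's background jobs; -l adds each job's PID.
sleep 30 &
sleep 30 &
jobs -l > jobs.txt
job_count=$(wc -l < jobs.txt)   # one line per backgrounded command
kill %1 %2                      # jobs can also be addressed by job number
rm -f jobs.txt
```

The %1-style job specs accepted by kill, fg and bg come from the same job table that jobs prints.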
You can run a program in the background using &. For example, if you wanted to run yum install XyZ, you could run: yum install XyZ & The stdout (output) of the program can be redirected using > to overwrite a file, or >> to append to a file. For example, if you wanted to log yum to a file yum.log: yum install XyZ > yum.log & Or, if you wanted to add the output to an existing file log: yum install XyZ >> log & Errors are printed to stderr, not stdout, and can be redirected to a file in the same way, but using 2> (or 2>> to append): yum install XyZ 2> errors yum install XyZ 2>> errors If you want to redirect both stderr and stdout, you can use &> (or &>> to append): yum install XyZ &> output yum install XyZ &>> output.
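The same redirections can be demonstrated with a harmless command in place of yum; sh -c here prints one line to stdout and one to stderr:

```shell
sh -c 'echo out; echo err >&2'  > out.log  2> err.log    # split the streams
sh -c 'echo out; echo err >&2' >> out.log 2>> err.log    # append instead
sh -c 'echo out; echo err >&2'  > both.log 2>&1          # both in one file
                                         # (bash also allows: &> both.log)
stdout_lines=$(wc -l < out.log)   # "out" twice: once written, once appended
stderr_first=$(head -n 1 err.log) # "err"
both_lines=$(wc -l < both.log)    # "out" and "err" together
rm -f out.log err.log both.log
```

Note that &> is a bash extension; the portable spelling for catching both streams is > file 2>&1.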
Parameters: command — the command that will be executed. output — if the output argument is present, then the specified array will be filled with every line of output from the command. Trailing whitespace, such as \n, is not included in this array. Note that if the array already contains some elements, exec will append to the end of the array. If you do not want the function to append elements, call unset() on the array before passing it to exec.
return_var — if the return_var argument is present along with the output argument, then the return status of the executed command will be written to this variable. Warning: when allowing user-supplied data to be passed to this function, use escapeshellarg() or escapeshellcmd() to ensure that users cannot trick the system into executing arbitrary commands. Note: if a program is started with this function, then in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends. Note: on Windows, exec will first start cmd.exe to launch the command. If you want to start an external program without starting cmd.exe, use proc_open() with the bypass_shell option set. Note: when safe mode is enabled, you can only execute files within the safe_mode_exec_dir.
For practical reasons, it is currently not allowed to have '..' components in the path to the executable. I too wrestled with getting a program to run in the background on Windows while the script continues to execute. Unlike the other solutions, this method allows you to start any program minimized, maximized, or with no window at all.
Llbra@phpbrasil's solution does work, but it sometimes produces an unwanted window on the desktop when you really want the task to run hidden. The calls below are made on the Run method of a WScript.Shell COM object. Start notepad.exe minimized in the background: Run('notepad.exe', 7, false);
Start a shell command invisibly in the background: Run('cmd /C dir /S %windir%', 0, false); Start MSPaint maximized and wait for you to close it before continuing the script: Run('mspaint.exe', 3, true);
For more info, see the Windows Script Host documentation for the Run method. If you're trying to use exec in a script that installs a handler for the SIGCHLD signal (i.e. pcntl_signal(SIGCHLD, 'sigHandler');), exec will return -1 as the exit code of the command (although the output is correct!). To resolve this, remove the signal handler before exec and add it again afterwards.
The code will look something like this: pcntl_signal(SIGCHLD, 'sigHandler'); // ... more code, functions, classes, etc. ... // Now executing the command via exec: // Clear the signal handler pcntl_signal(SIGCHLD, SIG_DFL); // Execute the command exec('mycommand', $output, $retval); // Set the signal back to our handler pcntl_signal(SIGCHLD, 'sigHandler'); // At this point we have the correct value in $retval. The same solution applies to system and passthru as well. It took quite some time to figure out the line I am going to post next.
If you want to execute a command in the background without having the script wait for the result, you can do the following: exec('/usr/bin/php /path/to/script.php > /path/to/logfile.log 2>&1 &'); There are a few things that are important here. First of all: use the full path to the PHP binary, because this command will run under the Apache user, and you will probably not have a command alias like php set up for that user. Second: note two things at the end of the command string: the '2>&1' and the '&'. The '2>&1' redirects errors to standard output. And the most important thing is the '&' at the end of the command string, which tells the shell not to wait for the command to finish.
Third: make sure you have 777 permissions on the 'logfile.log' file. Enjoy! From what I've gathered asking around, there is no way to pass a Perl array back into a PHP script using the exec function. The suggestion is to just print out your Perl array variables at the end of your script, and then grab each array member from the array returned by the exec function.
If you will be passing multiple arrays, or if you need to keep track of array keys as well as values, then as you print each array or hash variable at the end of your Perl script, you should concatenate the value with the key and array name, using an underscore, along the lines of: foreach (@array) { print "arrayname_${key}_$_"; } Then you would simply iterate through the array returned by the exec function and split each variable along the underscores. Here I'd especially like to thank Marat for the knowledge. Hope this is useful to others in search of a similar answer! I was trying to get an access list from a remote computer by executing cacls and parsing it in PHP, all in a Windows environment with Apache.
First I discovered psexec.exe from Windows Sysinternals. But when I called it from PHP I didn't get anything; it hung, although from the command line it worked fine. To make it work I just followed these steps: execute services.msc and find the Apache service (in my case wampapache); right-click it, open the Log On tab, change from Local System Account to a user-created account, enter the username and the password, and restart the service. (I added this user to the Administrators group to avoid permission problems, but that's not recommended.) It worked!
And it may work with IIS too, so try it if you have the same problem. Hope this helps someone, and sorry for my English. On Windows, exec issues an internal call to 'cmd /c yourcommand'. This implies that your command must follow the rules imposed by cmd.exe, which include an extra set of quotes around the full command. Current PHP versions take this into account and add the quotes automatically, but old versions didn't. Apparently the change was made in PHP 5.3.0 yet not backported to 5.2.x because it's a backwards-incompatible change. To sum up: in PHP 5.2 and older you have to surround the full command plus arguments in double quotes; in PHP 5.3 and greater you don't have to (and if you do, your script will break). If you are interested in the internals, this is the source code: sprintf(cmd, "%s /c \"%s\"", TWG(comspec), command); It can be found in php/php-src/trunk/TSRM/tsrm_win32.c (the comment system doesn't allow a direct link).
When calling exec from within an Apache PHP script, make sure to take care of stdout, stderr and stdin (as in the example below). If you forget this and your shell command produces output, the sh and Apache daemons may never return (they will normally time out after a few minutes).
From the calling web page, the script may seem to not return any data. If you want to start a PHP process that continues to run independently from Apache (with a different parent pid), use nohup. Example: exec('nohup php process.php > process.out 2> process.err &'); If you're having problems with any of the exec, system, etc. functions not working properly on Windows, finding the cause can be very frustrating, as it is hard to diagnose. I've found the free Process Monitor from Sysinternals (procmon.exe from live.sysinternals.com) to be VERY helpful here.
You can monitor anything done by e.g. php.exe, cmd.exe and yourprogram.exe, and it will list every access to files, the registry, etc., with return codes. You usually find some ACCESS DENIED somewhere in the log; correct the file's permissions and it works. This has saved me a LOT of time. I was having trouble using the PHP exec command to execute any batch file. Executing other commands (e.g. 'dir') worked fine, but if I executed a batch file, I received no output from the exec command.
The server setup I have consists of a Windows Server 2003 server running IIS 6 and PHP 5.2.3. On this server, I have: 1. granted execute permissions to the Internet user on c:\windows\system32\cmd.exe; 2. granted Everyone full control of the directory in which the batch file is written; 3. granted Everyone full control of the entire c:\cygwin\bin directory and its contents; 4. granted the Internet user 'log on as batch' permissions; 5. specified the full path to each file being executed; 6. tested these scripts from the command line on the server, where they work just fine; 7. ensured that %systemroot%\system32 is in the system path. It turns out that even with all of the above in place on the server, I had to specify the full path to cmd.exe in the exec call. When I used the call: $output = exec("c:\\windows\\system32\\cmd.exe /c $batchFileToRun"); then everything worked fine. In my situation, $batchFileToRun was the actual system path to the batch file (i.e., the result of a call to realpath).
Task: invoke psexec from PHP to execute a command on a remote computer. Environment: Windows XP Professional, Service Pack 3; Apache 2.2 installed (I used the version bundled in XAMPP 1.6.6a). The executable to be run must be in your system PATH!
If safe mode is on and you are trying to run a script in the background by appending ' > /dev/null 2> /dev/null & echo $!' to the command line, the browser will hang until the script is done.
My solution: create a shell script (e.g. runscript.sh) which contains the execution line for the script you are trying to run in the background. The runscript.sh is run by an exec call without the redirect string, which is now placed inside runscript.sh. runscript.sh will return almost immediately because the output of the original script is redirected, so it will not hang your browser, and the script runs fine in the background.
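The wrapper-script technique can be sketched as follows; sleep stands in for the real long-running script, and from PHP the call would simply be exec('/path/to/runscript.sh'):

```shell
# The redirection lives inside runscript.sh, so the caller returns
# immediately instead of waiting on the backgrounded job.
cat > runscript.sh <<'EOF'
#!/bin/sh
# the real long-running script, backgrounded with all output redirected
sleep 30 > /dev/null 2>&1 &
echo started
EOF
chmod +x runscript.sh
result=$(./runscript.sh)   # returns almost immediately
rm -f runscript.sh
```

Because the background job's stdout and stderr point at /dev/null rather than the caller's descriptors, nothing keeps the calling process (browser, PHP, or this command substitution) waiting.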
On Windows-Apache-PHP servers there is a problem with using the exec command more than once at the same time. If a script (with the exec command) is loaded more than once by the same user at the same time, the server will freeze. In my case the PHP script using the exec command was used as the source of an image tag; more than one such image in one HTML page made the server stop. The problem has been described, together with a solution: stop the session before the exec command and start it again after it. If you are using exec on a Windows 2003 machine with IIS, and after a few uses the page or IIS just hangs, you should add ' && exit' to your command. This way the cmd will always exit. You could also try this when you have problems with the system function or any other method you use to execute a program! It may also be useful with Apache, but I'm not sure about that. This is the second time this one got me, so I thought someone else might find this note useful too. I am creating a long-running exec'd process that I can access with page submissions up to 2 hours later. The problem is this: the first time I access the page, everything works like it should. The second time, the web browser waits and waits and never gets any messages; CPU time is not affected, so it is apparent that something is blocked.
What is actually happening is that all of the open file descriptors are being copied to the exec'd process, including the network connections. So the second time I try to access the web page, I am being given the old HTTP network connection, which is now being ignored. The solution is to scan all file handles from 3 on up and close them all. Remember that handles 0, 1, and 2 are standard input, standard output, and standard error. Well, after hours of fighting with output redirection, input redirection, error redirection, session_write_close(), blah blah blah, I think I found an easy way to execute a program in the background.
I used the following command: proc_close(proc_open('./command -foo=1 &', array(), $foo)); With the second argument you tell proc_open not to open any pipe to the new process, so you don't have to worry about anything; the third argument is only needed because it's not optional. Also, with the '&' operator the program runs in the background, so control is returned to PHP as soon as proc_close is executed (it doesn't have to wait).
In my case I don't use the user session in the executed script (there's no way it can be identified if it is not sent as a cookie or URL parameter), so there's no need for session_write_close() (correct me if I'm wrong about this). It worked for me. For Win2k/XP systems, and probably NT as well, after beating my head against a wall for a few days on this, I finally found a foolproof way of backgrounding processes and continuing the script. For some strange reason I just could not get backgrounding to work with either system or exec, or with the WScript.Shell Run method for that matter. I have used all three in the past with success, but it just wasn't happening this time. What I finally wound up doing was using psexec from the PsTools package. You would use it like: exec('psexec -d blah.bat'); which will start and immediately background blah.bat, and immediately resume running the rest of the PHP script. I am posting this as I see quite a few people looking to create a web-based interface to their MP3 player (XMMS, or really whatever you want to call from the command line) on Linux.
Alas, I am not the only one, and I did not think of it first (I suppose I'll have to resort to other get-rich-quick schemes). And even if there were a directly downloadable utility (as there is with XMMS-Control), you, like me, probably want the bowl of porridge that is just right, which means you want to make your own, because current versions of X, Y, or Z just don't do what you want. Most of the hard work is at the Linux command shell (ugh! I heard that! Drop and give me 20 command-line scripts!): log in as root, ensure /dev/dsp has the proper access privileges (chmod a+rw /dev/dsp), and add apache to the local list of users for the X server.
xhost +local:apache@ You should see the following if apache has not been added to the X server: Xlib: connection to ':0.0' refused by server Xlib: No protocol specified CRITICAL: Unable to open display And I am sure that error is as clear to you as it was to me! For testing purposes only, change the following so that you can su to apache from root and test xmms: temporarily change the line apache:x:48:48:Apache:/var/www:/sbin/nologin to apache:x:48:48:Apache:/var/www:/bin/bash so that you can test out apache's access to the X server and XMMS. Then su apache and run xmms.
Play a file. Don't just read this, actually play a file! The reason is that if it fails, xmms will likely give an error you can track down like a greyhound chasing that little bunny at the dog track!
(Speaking of get-rich-quick schemes.) And for the grand finale: if you can call xmms from the command line, you can likely do the following (unless you are running PHP in safe mode). Ensure that the wav, mp3, or whatever you decide to test with is accessible to apache.

I put the file into /var/www/html/icould.wav and ran chmod a+rw icould.wav. At your browser, make sure you hit shift+refresh so your browser doesn't give you a cached copy of the page. The note regarding how exec works under safe mode is wrong: echo y | echo x does not become echo "y | echo x".
It essentially becomes echo "y" "|" "echo" "x". The entire string is passed through escapeshellcmd, which effectively splits arguments at spaces.
Note that this makes it impossible to reliably pass arguments containing spaces to exec in safe mode. Since 4.3.3, escapeshellcmd tries to pass quoted arguments through (so you could try exec('cmd "arg 1" "arg 2"')), but not on Win32, and it will still wrongly backslash-escape any punctuation characters inside the quotes. Not only can the path not contain '..' components, but no directory or filename can contain the string '..': /path/to/my..executable will refuse to run. I had a bit of trouble using exec on Windows when passing a parameter to a PHP script file.
I found I needed to create a Windows batch 'wrapper' file to get the script file to accept the optional argument. 1) The PHP script file exists as c:\www\script.php. 2) PHP is installed in c:\www\php. 3) The BAT file contains: @c:\www\php\cli\php.exe c:\www\script.php %1 and is saved as c:\www\script.bat. 4) Use an exec statement from PHP: exec("c:\\www\\script.bat $arg", $output); If you want to pass more than one argument, carry on with %2, %3, %4 in your BAT file. Hope this helps somebody. I ran into the problem of not being able to set environment variables for my Korn shell scripts. In other words, I could not use putenv (at least to my understanding).
The solution I used was to write a script on the fly, then execute that script using exec, then delete the temp file (I keep it around for debugging). If you're trying to run something in the background on a system that uses systemd for its init, use the systemd-run utility to start your program in the background. systemd-run will run the command in a transient unit so that you can query its status with systemctl and view its log with journalctl. systemd-run returns immediately, launching your application for you and keeping track of it, so you don't need to worry so much about a forgotten process running on your server. From there you can query its status with systemctl status, and see its log with journalctl -u, using the unit name that systemd-run reports. ON A LINUX SYSTEM: note that the exec function calls the program DIRECTLY, without any intermediate shell.
Compare this with the backtick operator, which executes a shell that then calls the program you ask for. This makes the most difference when you are trying to pass complex variables to the program you are trying to execute: the shell can interpret your parameters before passing them through to the program being called, making a dog's breakfast of your input stream. Remember: exec calls the program DIRECTLY; the backtick operator (and, I THINK, the system call) executes the program through a shell. I wanted my script to: 1. execute an external command;
2. check whether the execution was OK (the return level); 3. log the error output if the execution wasn't OK; 4. not print the command's output in my script's output.
I looked at the exec, system, shell_exec and passthru functions, and concluded that the solution was to redirect the standard error (stderr) to the standard output (stdout). It's not very clean, since it mixes stderr with stdout, and I only wanted to log the stderr, but it seems to be the only solution (suggestions are welcome). This is the code I use: exec($command . ' 2>&1', $out, $err); if ($err) my_log_func(join("\n", $out)); Side note regarding the API comment 'For practical reasons, it is currently not allowed to have .. components in the path to the executable': it is still possible to pass the shell as the first command and a relative path to the script as the second, so this limitation/security mechanism is not much of one.
'sh' can be used to run non-scripts with relative paths via something like this: $out = null; $rc = 0; exec('sh -c "exec /bin/../bin/ls"', $out, $rc); print_r($out); echo "rc = $rc\n"; effectively bypassing any 'security' gained by the '..' check. You can embed PHP in a batch file so that it is essentially double-click-to-run. This might be useful for a script that does search-and-replace in numerous files, or some other tedious task that is too complex for batch files but doesn't warrant greater attention. It's really quite simple; I'm sure someone has thought of it before.
You can add whatever batch code you want after :START; just make sure you exit before you reach the PHP code so Windows doesn't fuss. @php %0 basically says 'open this file with PHP and run its PHP code'. Obviously it's really only a useful trick on Windows; I only really use it for update scripts on our company's servers. I suppose you could also just set Windows to open .php files with php.exe, but that seems like a rather silly thing to do, as you would most often want to edit PHP files, not run them directly.
@pause is optional, of course, but you may want to look at what PHP output to the command line before it exits. Example.bat: @GOTO START. I just tried the following: exec('cp /MySite/web/uploads/' . $matches[1][0] . '.mp4 /MySite/web/uploads/newfolder/' . $video->getSiteId() . '.mp4'); I think exec has an issue with the amount of PHP code within the call; the above doesn't work. To get around this, put everything into variables beforehand so you end up with: $location1 = '/MySite/web/uploads/' . $matches[1][0] . '.mp4'; $location2 = '/MySite/web/uploads/startupvids/mp4/' . $video->getSiteId() . '.mp4'; exec('cp ' . $location1 . ' ' . $location2); Recently I had to do some 'root' stuff via PHP. I could have done it through a cron job based on the database, but since I was too lazy to do that, I just wrote a shell script to create all the IMAP stuff, chown all the Maildirs properly, and provision the user on the local system.
Executing that command as another user from inside PHP looked like this: @exec("echo 'apache' | /usr/bin/sudo -u mail -S /var/www/html/mailset.sh $name"); The 'advantage' is that this way you can run commands as any sudoer on your system, which can be fine-tuned and is pretty useful. But again: the password is echoed in cleartext, and the option '-S' tells sudo to take the password from stdin, which it does, of course. This is a major security risk; do it only if you really need to run some commands as different (nologin) users and if you are really lazy :) My eventual solution was a cron job which runs every 5 minutes, gets datasets from the MySQL table and does the work without any exec from within PHP, which is the right way to do it. It is possible to capture only the error stream (stderr).
Here is how it goes: (somecommand par1 par2 1> /dev/null) 3>&1 1>&2 2>&3 First, stdout is redirected to /dev/null; by using parentheses it is possible to regain control over stderr and stdout, and then stderr and stdout are switched. The switch uses the standard variable-swap method: three file descriptors (buckets) are required to swap two streams with each other (you have two variables and need to switch their contents, so you must bring in a third, temporary variable to hold the contents of one value so you can properly swap them). Something you might find helpful: if you start a process, or e.g. a script which itself starts sub-processes, make sure you decouple them from your PHP script. Otherwise your PHP script will wait to get the feedback from them.
So it will wait until the sub-processes are dead. I know two ways to counter that: one is to make sure the sub-processes don't stay linked to the parent process; the other is fire-and-forget with PHP (on Unix/Linux just add '> /dev/null 2>&1 &' to the end of your command, though you should remember you won't get any output from the process). Hope that helps some of you; it took me a while until I came upon this idea.
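The fire-and-forget variant can be sketched like this, with sleep standing in for a process that spawns sub-processes:

```shell
# Detach stdin, stdout and stderr so the caller (e.g. PHP's exec) gets no
# feedback and does not wait for the child or its sub-processes.
sh -c 'sleep 30' < /dev/null > /dev/null 2>&1 &
child=$!
kill -0 "$child" && detached=yes   # the child runs on, ignored by the caller
kill "$child"                      # clean up the demo
```

Redirecting stdin from /dev/null as well as the output streams matters: a caller that reads the child's output (or a child that blocks reading the caller's input) is exactly what keeps the parent waiting.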
Note that on Linux/UNIX/etc., both the command and any parameters you send are visible to anyone with access to ps. For example, take some PHP code that calls an external program and sends a sort of password as a parameter: the entire string (probably something like 'secureprogram password') will be visible to any user that executes ps -A -F.
If you are going to be doing a lot of external calling, or any of it needs to be somewhat secure, you are probably better off writing a php extension.
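The exposure is easy to demonstrate; here the argument 12345 stands in for a password passed to a program:

```shell
# Anything on the command line is visible to other users via ps.
sleep 12345 &
pid=$!
visible=$(ps -o args= -p "$pid")   # e.g. "sleep 12345", readable by anyone
kill "$pid"
```

Secrets should instead reach the child via stdin, a file with restricted permissions, or (with some caution, since environments can also leak on some systems) an environment variable.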