Archives for posts with tag: software development

Here are some notes on installing TensorFlow on Fedora with CUDA support. I started with the Nvidia instructions. This install was performed on Fedora 23. I’m using an Nvidia GTX 1060, so I needed to use CUDA 8.0. I ended up installing for Python 2.7, as that looked like the most straightforward approach for the install. Using these notes will hopefully save you 1-2 hours… Enjoy! If you need help getting software development done, including machine learning, contact me and the Coderbuddy team.

Get started with these instructions at the Nvidia site, and note my changes below:

Step 1. Install NVIDIA CUDA — Fedora 23 version is available for download from Nvidia

download cuda

sudo dnf clean all
sudo dnf install cuda

Step 2. Install NVIDIA cuDNN
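The cuDNN step has no commands above, so here’s a sketch of what it involves. The archive name in the comments is an assumption (check the exact file you downloaded from developer.nvidia.com); the pattern is to extract the tarball and copy its headers and libraries into the CUDA prefix. To keep this runnable anywhere, the demo below rehearses that pattern against a mock archive and a temporary prefix; for the real install, extract the real tarball and copy into /usr/local/cuda with sudo.

```shell
#!/bin/sh
set -eu
# The cuDNN tarball (a name like cudnn-8.0-linux-x64-v5.1.tgz -- check
# the exact file you downloaded) unpacks to a cuda/ directory whose
# contents get copied into the CUDA prefix.
work=$(mktemp -d)
cd "$work"

# Build a mock archive with the same layout as the real one.
mkdir -p cuda/include cuda/lib64
echo "/* mock cudnn.h */" > cuda/include/cudnn.h
echo "mock lib" > cuda/lib64/libcudnn.so.5
tar czf cudnn-mock.tgz cuda
rm -r cuda

# The actual install steps (here into a temp prefix instead of
# /usr/local/cuda, which would need sudo):
prefix="$work/usr/local/cuda"
mkdir -p "$prefix/include" "$prefix/lib64"
tar xzf cudnn-mock.tgz
cp cuda/include/cudnn.h "$prefix/include/"
cp cuda/lib64/libcudnn* "$prefix/lib64/"

test -f "$prefix/include/cudnn.h" && echo "header installed"
test -f "$prefix/lib64/libcudnn.so.5" && echo "library installed"
```

For the real thing, substitute your downloaded tarball and /usr/local/cuda (with sudo), then run sudo ldconfig so the linker picks up the new library.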

Step 3. Install and Upgrade Pip

sudo dnf install python-devel python-pip
sudo pip2 install --upgrade pip

Step 4. Install Bazel — by compiling from source

sudo dnf install swig

NOTE: the software-properties-common package (from the Ubuntu instructions) is not needed on Fedora

Let’s install Bazel on Fedora by compiling from source, using the notes at:

http://www.bazel.io/versions/master/docs/install.html#compiling-from-source

download *-dist.zip
check gpg signature, sha256 sum

mkdir bazel
mv *-dist.zip  bazel

cd bazel
unzip *.zip

check your java version

java -version

alternatives --config java

You may also need to set JAVA_HOME

echo $JAVA_HOME

To find the value to use for JAVA_HOME:
run ls -al /usr/bin/javac and follow the symbolic links (e.g. by doing ls -al on each link)
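If following the symlinks by hand gets tedious, readlink -f resolves the whole chain in one shot. The demo below builds a throwaway two-link chain so it’s runnable anywhere; on the real system you’d run readlink -f /usr/bin/javac and strip the trailing /bin/javac to get your JAVA_HOME.

```shell
#!/bin/sh
set -eu
# On the real system:
#   readlink -f /usr/bin/javac
# lands somewhere under /usr/lib/jvm/...; drop the trailing /bin/javac
# and that's your JAVA_HOME.
# Demo with a throwaway two-link chain:
work=$(mktemp -d)
work=$(readlink -f "$work")          # canonicalize the temp dir itself
mkdir -p "$work/jvm/bin"
touch "$work/jvm/bin/javac"
ln -s "$work/jvm/bin/javac" "$work/javac.real"
ln -s "$work/javac.real" "$work/javac"
resolved=$(readlink -f "$work/javac")
echo "$resolved"                     # the final target, through both links
```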

I ended up with:

 export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64
 bash ./compile.sh

Note: replace 'user' with your username in the paths below

Build successful! Binary is here: /home/user/Downloads/bazel/output/bazel

sudo cp /home/user/Downloads/bazel/output/bazel /usr/local/bin

Step 5. Install TensorFlow

git clone https://github.com/tensorflow/tensorflow
cd tensorflow

Note: DON'T do git reset --hard here!

Optional, these may help:
export CC="/usr/bin/gcc"
export CXX="/usr/bin/g++"

 ./configure

NOTES: Reply 'n' to any 'build support' questions here besides GPU support, for the highest probability of success; you may also see questions on Hadoop and OpenCL support, etc. Also note I had to type in the location of gcc as /usr/bin/gcc, rather than the ccache default. You can double-check the locations of cuda-8.0 and cuDNN 5 with the find command in /usr/local/cuda and /usr/local/cuda-8.0 (e.g. cd /usr/local/cuda; find . -name '*5*'). Although you're using cuDNN 5.1, you only type in 5 here.

Please specify the location of python. [Default is /usr/bin/python]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with GPU support? [y/N] y
GPU support will be enabled for TensorFlow
Please specify which gcc nvcc should use as the host compiler. [Default is /usr/lib64/ccache/gcc]: /usr/bin/gcc
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-8.0
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 5
Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-8.0]: /usr/local/cuda
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 5.2,6.1
Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Setting up Cuda nvvm
Setting up CUPTI include
Setting up CUPTI lib64
Configuration finished
This took about 1100 seconds (roughly 18 minutes) to run on my system (or else that was one of the neighboring commands :).

bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
sudo pip2 install wheel
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip2 install --upgrade /tmp/tensorflow_pkg/tensorflow-0.*.whl

Step 6. Upgrade protobuf

sudo pip2 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/protobuf-3.0.0b2.post2-cp27-none-linux_x86_64.whl

Step 7. Test your installation

$ cd
$ python
>>> import tensorflow as tf

I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:116] Couldn't open CUDA library libcudnn.so.5. LD_LIBRARY_PATH: /home/user/torch/install/lib:/home/user/torch/install/lib:/home/user/torch/install/lib:/home/user/torch/install/lib:/home/user/torch/install/lib:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:3459] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.

>>> sess = tf.Session()
 I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
 I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
 name: GeForce GTX 1060 3GB
 major: 6 minor: 1 memoryClockRate (GHz) 1.7085
 pciBusID 0000:01:00.0
 Total memory: 2.94GiB
 Free memory: 2.28GiB
 I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
 I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
 I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 3GB, pci bus id: 0000:01:00.0)
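One more note: in my import log above, libcudnn.so.5 couldn't be opened because my LD_LIBRARY_PATH (full of torch paths) didn't include the directory libcudnn was copied to. A sketch of the fix, assuming the library landed in /usr/local/cuda/lib64 (adjust to wherever yours actually lives):

```shell
#!/bin/sh
# Prepend the cuDNN directory to LD_LIBRARY_PATH so the dynamic loader
# can find libcudnn.so.5 at import time.  The ${VAR:+...} expansion
# avoids a trailing colon when LD_LIBRARY_PATH was previously unset.
export LD_LIBRARY_PATH="/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Add the export line to your ~/.bashrc to make it stick across sessions, then re-run the python import test.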

Now you’re in business. As the warnings above show, you can also fine-tune the TensorFlow library compilation to take advantage of additional CPU instructions (SSE4.2, AVX, AVX2, FMA) if you’d like to go back and work on it more. Hope this helps and saves you plenty of time, so you can get on to building applications with TensorFlow!

Adrian Scott is a pioneer of social networking, having founded Ryze. He was also a founding investor in Napster. He currently helps companies build & ship technology and grow their metrics, as CEO of Coderbuddy.

Follow Adrian on Twitter @AdrianScottCom

When you’re founding a startup and you don’t have a senior technical co-founder, what is the default thing to do? Look for a CTO, a chief technology officer.

Guess what there’s a shortage of? Great CTOs…

But there’s another option.

What if for the same ‘price’ as a CTO, you could get a great part-time Silicon Valley CTO + 2 full-time developers? (Ask me how this works.)

That could actually produce more progress in 1) building out your business and 2) supporting your customer development efforts. You could get more momentum, while being more capital efficient.

With this approach, your CTO can focus on high value-add efforts, like:

– designing the technology architecture

– setting the tech team culture, including coding standards

– prioritizing steps in the development plan to support business needs & schedules

while your developers build out the code.

One of the challenges of the ‘full-time CTO who does everything’ approach is that they face constant context-switching between back-end coding, front-end UI tweaks, architecting, and syncing with the business team on business considerations. Constant context-switching can really reduce productivity. That reduced productivity is often in the critical path of the business, limiting speed and making other members of the management team worried.

When searching for a part-time CTO, there are also fewer requirements to filter on than if you’re looking for one person to do everything, since you’re not looking for them to be the combo CTO/front-end/back-end/mobile developer. That means it’s easier to find one, and you’re faster in getting started and going to market.

By helping build out multiple startups concurrently, a part-time CTO can bring you 5x the expertise in high-value areas like:

– technology architecture

– practical experience w/ different technologies and their development productivity and any potential surprise issues

– best practices for tech team culture

– leveraging usability design, metrics, analytics, A/B testing, etc.

What happens when you scale up the tech team and need more tech management? Do you then switch over to having a full-time CTO?

That’s not your only option. One great approach is to start building out your engineering management, by adding a half or full-time Director of Engineering. Or a VP of Engineering.

But what will investors say? Their main concerns are that:

– The Tech is being built in a high-quality way (Answer: You’ve got a high-quality CTO guiding that)

– Opportunities are not being missed (Answer: They’re handled because the CTO can focus on those opportunities and doesn’t have to context-switch to write CSS code)

– You have a plan to scale up development productivity (Answer: Such as by bringing on a Dir/VP of Engineering in addition to more developers when needed)

When talking with investors, you can address these concerns with the points mentioned above. Then you can talk about how your current plan is giving you higher development productivity by having 2 developers building the tech, and is capital efficient. You also have less dependency on one particular person.

If incentivized suitably with sufficient stock, an experienced CTO can probably help raise enough additional funding to cover a year’s worth of cash compensation or more. They aren’t going to promise this and it shouldn’t be the reason to bring them on board, but it can be a nice benefit. And in some cases, that additional new investor can be a name investor who strengthens and contributes momentum to the whole fundraising process.

The biggest benefits that an experienced CTO can bring to a startup are in the interplay of business and tech:

  • connecting and ordering tech priorities to business & customer needs, and
  • balancing technical and business goals.

Being on top of these areas means quicker time to market, quicker iteration in customer development, and a higher probability of making it big.

I welcome your thoughts on this.

 

